799 results for 160701 Clinical Social Work Practice
Abstract:
Relatively little information has been reported about foot and ankle problems experienced by nurses, despite anecdotal evidence suggesting they are common ailments. The purpose of this study was to improve knowledge about the prevalence of foot and ankle musculoskeletal disorders (MSDs) and to explore relationships between these MSDs and proposed risk factors. A review of the literature relating to work-related MSDs, MSDs in nursing, foot and lower-limb MSDs, screening for work-related MSDs, foot discomfort, footwear and the prevalence of foot problems in the community was undertaken. Based on the review, theoretical risk factors were proposed that pertained to the individual characteristics of the nurses, their work activity or their work environment. Three studies were then undertaken. A cross-sectional survey of 304 nurses, working in a large tertiary paediatric hospital, established the prevalence of foot and ankle MSDs. The survey collected information about self-reported risk factors of interest. The second study involved the clinical examination of a subgroup of 40 nurses, to examine changes in body discomfort, foot discomfort and postural sway over the course of a single work shift. Objective measurements of additional risk factors, such as individual foot posture (arch index) and the hardness of shoe midsoles, were performed. A final study was used to confirm the test-retest reliability of important aspects of the survey and key clinical measurements. Foot and ankle problems were the most common MSDs experienced by nurses in the preceding seven days (42.7% of nurses). They were the second most common MSDs to cause disability in the last 12 months (17.4% of nurses), and the third most common MSDs experienced by nurses in the last 12 months (54% of nurses). Substantial foot discomfort (Visual Analogue Scale (VAS) score of 50 mm or more) was experienced by 48.5% of nurses at some time in the last 12 months. 
Individual risk factors, such as obesity and the number of self-reported foot conditions (e.g., calluses, curled toes, flat feet), were strongly associated with the likelihood of experiencing foot problems in the last seven days or during the last 12 months. These risk factors showed consistent associations with disabling foot conditions and substantial foot discomfort. Some of these associations were dependent upon work-related risk factors, such as the location within the hospital and the average hours worked per week. Working in the intensive care unit was associated with higher odds of experiencing foot problems within the last seven days, foot problems in the last 12 months and foot problems that impaired activity in the last 12 months. Changes in foot discomfort experienced within a day showed large individual variability. Fifteen of the forty nurses experienced moderate/substantial foot discomfort at the end of their shift (VAS of 25 mm or more). Analysis of the association between risk factors and moderate/substantial foot discomfort revealed that foot discomfort was less likely for nurses who were older, had greater BMI or had lower foot arches, as indicated by higher arch index scores. The nurses’ postural sway decreased over the course of the work shift, suggesting improved body balance by the end of the day. These findings were unexpected. Further clinical studies examining individual nurses on several work shifts are needed to confirm these results, particularly due to the small sample size and the single measurement occasion. There are more than 280,000 nurses registered to practice in Australia. The nursing workforce is ageing and the prevalence of foot problems will increase. If the prevalence estimates from this study are extrapolated to the profession generally, more than 70,000 hospital nurses have experienced substantial foot discomfort and 25,000-30,000 hospital nurses have been limited in their activity due to foot problems during the last 12 months. 
Nurses with underlying foot conditions were more likely to report having foot problems at work. Strategies to prevent or manage foot conditions exist and they should be disseminated to nurses. Obesity is a significant risk factor for foot and ankle MSDs and these nurses may need particular assistance to manage foot problems. The risk of foot problems for particular groups of nurses, e.g. obese nurses, may vary depending upon the location within the hospital. Further research is needed to confirm the findings of this study. Similar studies should be conducted in other occupational groups that require workers to stand for prolonged periods.
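The extrapolation in the abstract above is simple arithmetic, and can be sketched as follows. Note that the hospital-nurse workforce figure below is an assumption made for illustration only; the abstract states the 280,000 registered-nurse total but not the hospital-nurse base it applied its prevalence rates to.

```python
# Hedged sketch of the abstract's extrapolation arithmetic.
REGISTERED_NURSES = 280_000  # stated in the abstract
HOSPITAL_NURSES = 150_000    # ASSUMED hospital-based share, for illustration only

substantial_discomfort_rate = 0.485  # VAS >= 50 mm at some time in the last 12 months
activity_limited_rate = 0.174        # disabling foot/ankle MSDs in the last 12 months

substantial = HOSPITAL_NURSES * substantial_discomfort_rate
limited = HOSPITAL_NURSES * activity_limited_rate

print(f"~{substantial:,.0f} nurses with substantial foot discomfort")
print(f"~{limited:,.0f} nurses limited in activity by foot problems")
```

With an assumed base of 150,000 hospital nurses, the rates yield roughly 72,750 and 26,100 nurses respectively, consistent with the abstract's "more than 70,000" and "25,000-30,000" figures.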
Abstract:
This case study aims to describe how general parenting principles can be used as part of parent-led, family-focused child weight management that is in line with current Australian Clinical Practice Guidelines. A parent-led, family-focused child weight management program was designed for use by dietitians with parents of young children (five- to nine-year-olds). The program utilises the cornerstones of overweight treatment: diet, activity, behaviour modification and family support delivered in an age-appropriate, family-focused manner. Parents participate in 16 sessions (4 parenting-focused, 8 lifestyle-focused and 4 individual telephone support calls) conducted weekly, fortnightly then monthly over six months. This case study illustrates how a family used the program, resulting in a reduced degree of overweight and a stabilised waist circumference in the child over 12 months. In conclusion, linking parenting skills to healthy family lifestyle education provides an innovative approach to family-focused child weight management. It addresses key Australian Clinical Practice Guidelines, works at the family level, and provides a means for dietitians to easily adopt age-appropriate behaviour modification as part of their practice.
Abstract:
Purpose: This paper aims to show that identification of expectations and software functional requirements via consultation with potential users is an integral component of the development of an emergency department patient admissions prediction tool. ---------- Design/methodology/approach: Thematic analysis of semi-structured interviews with 14 key health staff delivered rich data regarding existing practice and future needs. Participants included emergency department staff, bed managers, nurse unit managers, directors of nursing, and personnel from health administration. ---------- Findings: Participants contributed contextual insights on the current system of admissions, revealing a culture of crisis, imbued with misplaced communication. Their expectations and requirements of a potential predictive tool provided strategic data that moderated the development of the Emergency Department Patient Admissions Prediction Tool, based on their insistence that it feature availability, reliability and relevance. In order to deliver these stipulations, participants stressed that it should be incorporated, validated, defined and timely. ---------- Research limitations/implications: Participants were envisaging a concept and use of a tool that was somewhat hypothetical. However, further research will evaluate the tool in practice. ---------- Practical implications: Participants' unsolicited recommendations regarding implementation will not only inform a subsequent phase of the tool evaluation, but are eminently applicable to any process of implementation in a healthcare setting. ---------- Originality/value: The consultative process engaged clinicians and the paper delivers an insider view of an overburdened system, rather than an outsider's observations.
Abstract:
Executive summary. Objective: The aims of this study were to identify the impact of Pandemic (H1N1) 2009 Influenza on Australian Emergency Departments (EDs) and their staff, and to inform planning, preparedness, and response management arrangements for future pandemics, as well as managing infectious patients presenting to EDs in everyday practice. Methods: This study involved three elements: 1. The first element of the study was an examination of published material including published statistics. Standard literature research methods were used to identify relevant published articles. In addition, data about ED demand was obtained from Australian Government Department of Health and Ageing (DoHA) publications, with several state health departments providing more detailed data. 2. The second element of the study was a survey of Directors of Emergency Medicine identified with the assistance of the Australasian College for Emergency Medicine (ACEM). This survey retrieved data about demand for ED services and elicited qualitative comments on the impact of the pandemic on ED management. 3. The third element of the study was a survey of ED staff. A questionnaire was emailed to members of three professional colleges—the ACEM; the Australian College of Emergency Nursing (ACEN); and the College of Emergency Nursing Australasia (CENA). The overall response rate for the survey was 18.4%, with 618 usable responses from 3355 distributed questionnaires. Topics covered by the survey included ED conditions during the (H1N1) 2009 influenza pandemic; information received about Pandemic (H1N1) 2009 Influenza; pandemic plans; the impact of the pandemic on ED staff with respect to stress; illness prevention measures; support received from others in work role; staff and others’ illness during the pandemic; other factors causing ED staff to miss work during the pandemic; and vaccination against Pandemic (H1N1) 2009 Influenza. 
Both qualitative and quantitative data were collected and analysed. Results: The results obtained from Directors of Emergency Medicine quantifying the impact of the pandemic were too limited for interpretation. Data sourced from health departments and published sources demonstrated an increase in influenza-like illness (ILI) presentations of between one and a half and three times the normal level of ILI presentations. Directors of Emergency Medicine reported a reasonable level of preparation for the pandemic, with most reporting the use of pandemic plans that translated into relatively effective operational infection control responses. Directors reported a highly significant impact on EDs and their staff from the pandemic. Growth in demand and related ED congestion were highly significant factors causing distress within the departments. Most (64%) respondents established a ‘flu clinic’ either as part of the ED operations or external to it. They did not note a significantly higher rate of sick leave than usual. Responses relating to the impact on staff were proportional to the size of the colleges. Most respondents felt strongly that Pandemic (H1N1) 2009 Influenza had a significant impact on demand in their ED, with most patients having low levels of clinical urgency. Most respondents felt that the pandemic had a negative impact on the care of other patients, and 94% revealed some increase in stress due to lack of space for patients, increased demand, and filling staff deficits. Levels of concern about themselves or their family members contracting the illness were less significant than expected. Nurses displayed significantly higher levels of stress overall, particularly in relation to skill-mix requirements, lack of supplies and equipment, and patient and patients’ family aggression. More than one-third of respondents became ill with an ILI. 
Whilst respondents themselves reported taking low levels of sick leave, they cited difficulties with replacing absent staff. Ranked from highest to lowest, the sources from which respondents gained useful support were ED colleagues, ED administration, their hospital occupational health department, hospital administration, professional colleges, state health department, and their unions. Respondents were generally positive about the information they received overall; however, the volume of information was considered excessive and sometimes inconsistent. The media was criticised as scaremongering and sensationalist and as being the cause of many unnecessary presentations to EDs. Of concern to the investigators was that a large proportion (43%) of respondents did not know whether a pandemic plan existed for their department or hospital. A small number of staff reported being redeployed from their usual workplace for personal risk factors or operational reasons. At the time of the survey (29 October – 18 December 2009), 26% of ED staff reported being vaccinated against Pandemic (H1N1) 2009 Influenza. Of those not vaccinated, half indicated they would ‘definitely’ or ‘probably’ not get vaccinated, with the main reasons being that the vaccine was ‘rushed into production’, ‘not properly tested’ or ‘came out too late’, or that it was not needed due to prior infection or exposure, or due to the mildness of the disease. Conclusion: Pandemic (H1N1) 2009 Influenza had a significant impact on Australian Emergency Departments. The pandemic exposed problems in existing plans, particularly a lack of guidelines, general information overload, and confusion due to the lack of a single authoritative information source. Of concern was the high proportion of respondents who did not know if their hospital or department had a pandemic plan. Nationally, the pandemic communication strategy needs a detailed review, with more engagement with media networks to encourage responsible and consistent reporting. 
Also of concern was the low level of immunisation, and the low level of intention to accept vaccination. This is a problem seen in many previous studies relating to seasonal influenza and health care workers. The design of EDs needs to be addressed to better manage infectious patients. Significant workforce issues were confronted in this pandemic, including maintaining appropriate staffing levels; staff exposure to illness; access to, and appropriate use of, personal protective equipment (PPE); and the difficulties associated with working in PPE for prolonged periods. An administrative issue of note was the reporting requirement, which created considerable additional stress for staff within EDs. Peer and local support strategies helped ensure staff felt their needs were provided for, creating resilience, dependability, and stability in the ED workforce. Policies regarding the establishment of flu clinics need to be reviewed. The ability to create surge capacity within EDs by considering staffing, equipment, physical space, and stores is of primary importance for future pandemics.
Abstract:
The nature of the relationship that is negotiated, developed and maintained between a clinical supervisor and supervisee is central to effectively engage in clinical work, to promote professional and personal development, and to ensure consistent ethical practice. In this chapter attention is given to the challenges, importance and benefits of the supervisory relationship. The ability to form and sustain relationships in supervision and in clinical practice is more crucial than specific knowledge and therapeutic skills (Dye, 2004). Attention to parallel process, the working alliance, multiple roles, expectations and acculturative issues are addressed. This is an introduction to some of the most salient issues concerning the supervisory relationship and is a review of concepts and processes discussed in greater depth throughout this textbook. The reader is encouraged to utilise the references and suggested readings to deepen their understanding of the supervisory relationship.
Abstract:
This paper reports on the experience of undergraduate speech–language pathology students at one university chosen for the implementation stage of the Palliative Care Curriculum for Undergraduates (PCC4U) Project. Funded by a government department for health and ageing through a national palliative care programme, the project was managed by a team of researchers from the discipline of nursing. The PCC4U project championed the inclusion of palliative care education as an integral part of medical, nursing, and allied healthcare undergraduate training. Of the pilot sites chosen for the PCC4U project, only one site, reported here, included both speech–language pathology and social work disciplines, providing an important opportunity for interdisciplinary collaboration on novel curriculum development in an area of mutual interest. This synergy served as an excellent foundation for ongoing opportunities for interdisciplinary teaching and learning in the university. Speech–language pathology students reported that the project was an invaluable addition to their education and preparation for clinical practice.
Abstract:
Objective: Uterine Papillary Serous Carcinoma (UPSC) is uncommon and accounts for less than 5% of all uterine cancers; therefore, the majority of evidence about the benefits of adjuvant treatment comes from retrospective case series. We conducted a prospective multi-centre non-randomized phase 2 clinical trial using four cycles of adjuvant paclitaxel plus carboplatin chemotherapy followed by pelvic radiotherapy, in order to evaluate the tolerability and safety of this approach. Methods: This trial enrolled newly diagnosed, previously untreated patients with stage 1b-4 (FIGO-1988) UPSC with a papillary serous component of at least 30%. Paclitaxel (175 mg/m2) and carboplatin (AUC 6) were administered on day 1 of each 3-week cycle for 4 cycles. Chemotherapy was followed by external beam radiotherapy to the whole pelvis (50.4 Gy over 5.5 weeks). Completion and toxicity of treatment (Common Toxicity Criteria, CTC) and quality of life measures were the primary outcome indicators. Results: Twenty-nine of 31 patients completed treatment as planned. Dose reduction was needed in 9 patients (29%), treatment delay in 7 (23%), and treatment cessation in 2 patients (6.5%). Grade 3 or 4 hematologic toxicity occurred in 19% (6/31) of patients. Patients' self-reported quality of life remained stable throughout treatment. Thirteen of the 29 patients with stages 1–3 disease (44.8%) recurred (average follow-up 28.1 months, range 8–60 months). Conclusion: This multimodal treatment is feasible, safe, tolerated reasonably well, and would be suitable for use in multi-institutional prospective randomized clinical trials incorporating novel therapies in patients with UPSC.
Abstract:
Background: Clinical practice and clinical research have made a concerted effort to move beyond the use of clinical indicators alone and embrace patient-focused care through the use of patient-reported outcomes such as health-related quality of life. However, unless patients give consistent consideration to the health states that give meaning to the measurement scales used to evaluate these constructs, longitudinal comparison of these measures may be invalid. This study aimed to investigate whether patients give consideration to a standard health state rating scale (EQ-VAS) and whether consideration of good and poor health state descriptors immediately changes their self-report. Methods: A randomised crossover trial was implemented amongst hospitalised older adults (n = 151). Patients were asked to consider descriptions of extremely good (Description-A) and poor (Description-B) health states. The EQ-VAS was administered as a self-report at baseline, after the first descriptors (A or B), then again after the remaining descriptors (B or A respectively). At baseline patients were also asked if they had considered either EQ-VAS anchor. Results: Overall, 106/151 (70%) participants changed their self-evaluation by ≥5 points on the 100-point VAS, with a mean (SD) change of +4.5 (12) points (p < 0.001). A total of 74/151 (49%) participants did not consider the best health VAS anchor; of the 77 who did, 59 (77%) thought the good health descriptors were more extreme (better) than they had previously considered. Similarly, 85/151 (56%) participants did not consider the worst health anchor; of the 66 who did, 63 (95%) thought the poor health descriptors were more extreme (worse) than they had previously considered. Conclusions: Health state self-reports may not be well considered. An immediate significant shift in response can be elicited by exposure to a mere description of an extreme health state despite no actual change in underlying health state occurring. 
Caution should be exercised in research and clinical settings when interpreting subjective patient-reported outcomes that are dependent on brief anchors for meaning. Trial Registration: Australian and New Zealand Clinical Trials Registry (#ACTRN12607000606482) http://www.anzctr.org.au
Abstract:
Rationale, aims and objectives: Patient preference for interventions aimed at preventing in-hospital falls has not previously been investigated. This study aims to contrast the amount patients are willing to pay to prevent falls through six intervention approaches. ----- ----- Methods: This was a cross-sectional willingness-to-pay (WTP), contingent valuation survey conducted among hospital inpatients (n = 125) during their first week on a geriatric rehabilitation unit in Queensland, Australia. Contingent valuation scenarios were constructed for six falls prevention interventions: a falls consultation, an exercise programme, a face-to-face education programme, a booklet and video education programme, hip protectors and a targeted, multifactorial intervention programme. The benefit to participants in terms of reduction in risk of falls was held constant (30% risk reduction) within each scenario. ----- ----- Results: Participants valued the targeted, multifactorial intervention programme the highest [mean WTP (95% CI): $(AUD)268 ($240, $296)], followed by the falls consultation [$215 ($196, $234)], exercise [$174 ($156, $191)], face-to-face education [$164 ($146, $182)], hip protector [$74 ($62, $87)] and booklet and video education interventions [$68 ($57, $80)]. A ‘cost of provision’ bias was identified, which adversely affected the valuation of the booklet and video education intervention. ----- ----- Conclusion: There may be considerable indirect and intangible costs associated with interventions to prevent falls in hospitals that can substantially affect patient preferences. These costs could substantially influence the ability of these interventions to generate a net benefit in a cost–benefit analysis.
Abstract:
Background: Exercise for Health was a pragmatic, randomised, controlled trial examining the effect of an eight-month exercise intervention on function, treatment-related side effects and quality of life following breast cancer, compared with usual care. The intervention commenced six weeks post-surgery, and two modes of delivering the same intervention were compared with usual care. The purpose of this paper is to describe the study design, along with outcomes related to recruitment, retention and representativeness, and intervention participation. Methods: Women newly diagnosed with breast cancer and residing in a major metropolitan city of Queensland, Australia, were eligible to participate. Consenting women were randomised to a face-to-face-delivered exercise group (FtF, n=67), a telephone-delivered exercise group (Tel, n=67) or a usual care group (UC, n=60) and were assessed pre-intervention (5 weeks post-surgery), mid-intervention (6 months post-surgery) and 10 weeks post-intervention (12 months post-surgery). Each intervention arm entailed 16 sessions with an Exercise Physiologist. Results: Of 318 potentially eligible women, 63% (n=200) agreed to participate, with a 12-month retention rate of 93%. Participants were similar to the Queensland breast cancer population with respect to disease characteristics, and the randomisation procedure was mostly successful at attaining group balance, with the few minor imbalances observed unlikely to influence intervention effects given balance in other related characteristics. Median participation was 14 (min, max: 0, 16) and 13 (min, max: 3, 16) intervention sessions for the FtF and Tel groups, respectively, with 68% of those in Tel and 82% in FtF participating in at least 75% of sessions. Discussion: Participation in both intervention arms during and following treatment for breast cancer was feasible and acceptable to women. 
Future work, designed to inform translation into practice, will evaluate the quality of life, clinical, psychosocial and behavioural outcomes associated with each mode of delivery.
Abstract:
This report summarises the action research undertaken by the Brisbane North and West Youth Connections Consortium during 2010 and facilitated by staff from QUT. The Consortium consists of a lead agency which undertakes both program coordination and direct service delivery (Brisbane Youth Service) and four other agencies across the region who undertake direct service delivery. Funds for Youth Connections are provided by the Australian Government Department of Education, Employment and Workplace Relations. This report describes and analyses the participatory action research (PAR) undertaken in 2011, including eight case studies exploring questions seen as important to the re-engagement of young people in education and training.
Abstract:
The Australian government has found early intervention services related to youth homelessness, expanded through the Reconnect program, to be quite effective and successful. The main requirements of good early intervention practice are highlighted.
Abstract:
In 2008, a three-year pilot ‘pay for performance’ (P4P) program, known as the ‘Clinical Practice Improvement Payment’ (CPIP), was introduced into Queensland Health (QHealth). QHealth is a large public health sector provider of acute, community, and public health services in Queensland, Australia. The organisation has recently embarked on a significant reform agenda including a review of existing funding arrangements (Duckett et al., 2008). Partly in response to this reform agenda, a casemix funding model has been implemented to reconnect health care funding with outcomes. CPIP was conceptualised as a performance-based scheme that rewarded quality with financial incentives. This is the first time such a scheme has been implemented into the public health sector in Australia with a focus on rewarding quality, and it is unique in that it has a large state-wide focus and includes 15 Districts. CPIP initially targeted five acute and community clinical areas: Mental Health, Discharge Medication, Emergency Department, Chronic Obstructive Pulmonary Disease, and Stroke. The CPIP scheme was designed around key concepts including the identification of clinical indicators that met the set criteria of: high disease burden, a well-defined single diagnostic group or intervention, significant variations in clinical outcomes and/or practices, good evidence, and clinician control and support (Ward, Daniels, Walker & Duckett, 2007). This evaluative research targeted Phase One of implementation of the CPIP scheme, from January 2008 to March 2009. A formative evaluation utilising a mixed methodology and complementarity analysis was undertaken. The research involved three research questions and aimed to determine the knowledge, understanding, and attitudes of clinicians; identify improvements to the design, administration, and monitoring of CPIP; and determine the financial and economic costs of the scheme. 
Three key studies were undertaken to ascertain responses to the key research questions. Firstly, a survey of clinicians was undertaken to examine their levels of knowledge and understanding and their attitudes to the scheme. Secondly, the study sought to apply Statistical Process Control (SPC) to the process indicators to assess whether this enhanced the scheme, and a third study examined a simple economic cost analysis. The CPIP Survey of clinicians elicited 192 respondents. Over 70% of these respondents were supportive of the continuation of the CPIP scheme. This finding was also supported by the results of a quantitative attitude survey, which identified positive attitudes in 6 of the 7 domains (including impact, awareness and understanding, and clinical relevance), all being scored positively across the combined respondent group. SPC as a trending tool may play an important role in the early identification of indicator weakness for the CPIP scheme. This evaluative research study supports a previously identified need in the literature for a phased introduction of Pay for Performance (P4P) type programs. It further highlights the value of undertaking a formal risk assessment of clinician, management, and systemic levels of literacy and competency with measurement and monitoring of quality prior to a phased implementation. This phasing can then be guided by a P4P Design Variable Matrix, which provides a selection of program design options such as indicator targets and payment mechanisms. It became evident that a clear process is required to standardise how clinical indicators evolve over time and direct movement towards more rigorous ‘pay for performance’ targets and the development of an optimal funding model. Use of this matrix will enable the scheme to mature and build the literacy and competency of clinicians and the organisation as implementation progresses. 
Furthermore, the research identified that CPIP created a spotlight on clinical indicators, and incentive payments of over five million dollars, from a potential ten million, were secured across the five clinical areas in the first 15 months of the scheme. This indicates that quality was rewarded in the new QHealth funding model, and despite issues being identified with the payment mechanism, funding was distributed. The economic model used identified a relatively low cost of reporting (under $8,000) as opposed to funds secured of over $300,000 for mental health, as an example. Movement to a full cost-effectiveness study of CPIP is supported. Overall, the introduction of the CPIP scheme into QHealth has been a positive and effective strategy for engaging clinicians in quality and has been the catalyst for the identification and monitoring of valuable clinical process indicators. This research has highlighted that clinicians are supportive of the scheme in general; however, there are some significant risks that include the functioning of the CPIP payment mechanism. Given clinician support for the use of a pay-for-performance methodology in QHealth, the CPIP scheme has the potential to be a powerful addition to a multi-faceted suite of quality improvement initiatives within QHealth.
Abstract:
Background: Up to one-third of people affected by cancer experience ongoing psychological distress and would benefit from screening followed by an appropriate level of psychological intervention. This rarely occurs in routine clinical practice due to barriers such as lack of time and experience. This study investigated the feasibility of community-based telephone helpline operators screening callers affected by cancer for their level of distress using a brief screening tool (the Distress Thermometer), and triaging to the appropriate level of care using a tiered model. Methods: Consecutive cancer patients and carers who contacted the helpline from September to December 2006 (n = 341) were invited to participate. Routine screening and triage were conducted by helpline operators at this time. Additional socio-demographic and psychosocial adjustment data were collected by telephone interview by research staff following the initial call. Results: The Distress Thermometer had good overall accuracy in detecting general psychosocial morbidity (Hospital Anxiety and Depression Scale cut-off score ≥ 15) for cancer patients (AUC = 0.73) and carers (AUC = 0.70). We found 73% of participants met the Distress Thermometer cut-off for distress caseness (a score ≥ 4), and optimal sensitivity (83%, 77%) and specificity (51%, 48%) against the Hospital Anxiety and Depression Scale were obtained with cut-offs of ≥ 4 and ≥ 6 in the patient and carer groups respectively. Distress was significantly associated with Hospital Anxiety and Depression Scale scores (total, as well as anxiety and depression subscales) and level of care in cancer patients, as well as with the Hospital Anxiety and Depression Scale anxiety subscale for carers. There was a trend for more highly distressed callers to be triaged to more intensive care, with patients with distress scores ≥ 4 more likely to receive extended or specialist care. 
Conclusions: Our data suggest that it is feasible for community-based cancer helpline operators to screen callers for distress using a brief screening tool, the Distress Thermometer, and to triage callers to an appropriate level of care using a tiered model. The Distress Thermometer is a rapid and non-invasive alternative to longer psychometric instruments, and may provide part of the solution in ensuring that distressed patients and carers affected by cancer are identified and supported appropriately.
Abstract:
Older adults, especially those who are acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies reporting the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, this evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on its prevalence, malnutrition is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in Asian elderly populations and none has been validated in Singapore. Due to the ethnic, cultural, and language differences among Singapore's older adults, the results from non-Asian validation studies may not be applicable. It is therefore important to identify validated, population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore.
The aims of this study were therefore to: i) characterise hospitalised elderly patients in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)], administered in no particular order. The total scores were not computed during the screening process, so that the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward. The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher-level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression.
The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, "Do you often feel sad or depressed?"), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (refer to Section 5.6). The prevalence of malnutrition on admission was 35% (defined by the SGA), and malnutrition was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the greatest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]), and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity, and cognitive function. The SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutritional screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to experiencing a decline in nutritional status (defined as weight loss >1% per week).
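The nutritional-decline criterion used above — weight loss of more than 1% of admission weight per week of stay — is a simple calculation. The sketch below uses an invented function name and example values, not study data.

```python
def weekly_weight_loss_pct(admission_kg, discharge_kg, los_days):
    """Percentage of admission body weight lost per week of hospital stay."""
    loss_pct = (admission_kg - discharge_kg) / admission_kg * 100.0
    return loss_pct / (los_days / 7.0)

# Invented example: 60 kg on admission, 59 kg at discharge, 9-day stay.
rate = weekly_weight_loss_pct(60.0, 59.0, 9)  # ~1.3% per week
declined = rate > 1.0  # meets the >1% per week decline criterion
```

Under this criterion, even a modest absolute loss (1 kg here) counts as clinically significant when it occurs over a short stay, which is consistent with the short median LOS reported above.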
Those with reduced nutritional status were more likely to be discharged to higher-level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutritional screening and assessment tools amongst hospitalised older adults in an Asian population. The results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the nutrition screening and assessment tools currently used among hospitalised older adults in Singapore. As older adults may have developed malnutrition prior to hospital admission, or may experience clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community and nutritional monitoring should continue beyond hospitalisation.