45 results for 35.72

in Queensland University of Technology - ePrints Archive


Relevance: 60.00%

Abstract:

This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity: a cutter suction dredger, and Brisbane: a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and 40 to 50 measurements were possible in one day. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique could deliver consistent results when measuring EFs for particle number (PN), NOx, SO2, and PM2.5 within a targeted dilution factor range of 50-1000 suitable for remote sampling. The EFs for NOx, SO2, and PM2.5 were found to be independent of dilution for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2-9.6 × 10^15 (kg-fuel)^-1; NOx: 35-72 g(NO2)·(kg-fuel)^-1; SO2: 0.6-1.1 g(SO2)·(kg-fuel)^-1; and PM2.5: 0.7-6.1 g(PM2.5)·(kg-fuel)^-1. For the Brisbane, they were PN: 1.0-1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4-8.0 g(NO2)·(kg-fuel)^-1; SO2: 1.3-1.7 g(SO2)·(kg-fuel)^-1; and PM2.5: 1.2-5.6 g(PM2.5)·(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels' engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided. The size distributions were found to be consistently unimodal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94-131 nm in the case of the Amity, and 58-80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
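
A note on the calculation: fuel-based EFs of this kind are conventionally obtained by a carbon-balance approach, ratioing the background-corrected pollutant signal to the background-corrected CO2 and scaling by the CO2 emitted per kilogram of fuel burned. The sketch below illustrates that arithmetic under the assumption of a marine diesel carbon mass fraction of about 0.87; the function and the sample concentrations are illustrative, not taken from the paper.

```python
# Carbon-balance emission factor sketch (illustrative values, not from the paper).
# EF_X = (delta_X / delta_CO2) * EF_CO2, where delta_X and delta_CO2 are
# background-corrected plume concentrations and EF_CO2 is the CO2 emitted
# per kg of fuel burned.

CARBON_FRACTION = 0.87                            # assumed carbon mass fraction of marine diesel
EF_CO2 = CARBON_FRACTION * 44.0 / 12.0 * 1000.0   # g CO2 per kg fuel (~3190)

def emission_factor(plume_x, background_x, plume_co2, background_co2):
    """Fuel-based EF of species X, in (mass of X) per kg of fuel.

    Concentrations must share consistent units (here both in g/m3)
    so the ratio delta_X / delta_CO2 is dimensionless mass-per-mass.
    """
    delta_x = plume_x - background_x
    delta_co2 = plume_co2 - background_co2
    if delta_co2 <= 0:
        raise ValueError("plume CO2 must exceed background for a valid sample")
    return delta_x / delta_co2 * EF_CO2

# Example: NOx (as NO2) at 2600 ug/m3 in the plume against a 100 ug/m3
# background, with plume CO2 0.9 g/m3 against a 0.7 g/m3 background.
ef_nox = emission_factor(2600e-6, 100e-6, 0.9, 0.7)   # g(NO2) per kg fuel
print(f"EF(NOx) ~= {ef_nox:.1f} g(NO2)/(kg fuel)")    # ~39.9 g/kg
```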

Relevance: 30.00%

Abstract:

Analysis of bovine interphotoreceptor matrix (IPM) and conditioned medium from human Y-79 retinoblastoma cells by gelatin SDS-PAGE zymography reveals abundant activity of a 72-kDa M(r) gelatinase. The 72-kDa gelatinase from either source is inhibited by EDTA but not by aprotinin or NEM, indicating that it is a metalloproteinase (MMP). After gelatin-Sepharose affinity purification, the 72-kDa MMP is converted to a 62-kDa species with APMA treatment, typical of the previously described gelatinase MMP-2. The latent 72-kDa gelatinase from either bovine IPM or Y-79 media autoactivates without APMA in the presence of calcium and zinc after 72 hr at 37°C, producing a fully active mixture of proteinase species 50 (48 in Y-79 medium), 38 and 35 kDa in size. The presence of inhibitory activity was examined in both whole bovine IPM and IPM fractions separated by SDS-PAGE. Whole IPM inhibited the gelatinolytic activity of autoactivated Y-79-derived MMP in a dose-dependent manner. Inhibitory activities are observed in two protein fractions of 27-42 and 20-25 kDa. Western blots using antibodies to human tissue inhibitors of metalloproteinases 1 and 2 (TIMP-1 and -2) reveal the presence of two TIMP-1-like proteins at 21 and 29 kDa in inhibitory fractions of the bovine IPM. TIMP-2 was not detected in the inhibitory IPM fractions, consistent with the observed autoactivation of the bovine IPM 72-kDa gelatinase. Potential roles for this IPM MMP-TIMP system include physiologic remodelling of the neural retina-RPE cell interface and digestion of shed rod outer segments, as well as pathological processes such as retinal detachment, RPE cell migration, neovascularization and tumor progression. Cultured Y-79 cells appear to be a good model for studying the production and regulation of this proteinase system.

Relevance: 30.00%

Abstract:

Purpose: The purpose of this study is to examine the prevalence, sociodemographic and clinical predictors, and physical and psychosocial correlates of unmet needs among women 3-5 years following treatment for endometrial cancer. Methods: Women with endometrial cancer completed a survey around the time of diagnosis and again 3-5 years later. The follow-up survey asked women about their physical and psychosocial functioning and supportive care needs, measured with the Cancer Survivors' Unmet Needs measure (CaSUN). Multivariable-adjusted logistic regression identified the predictors and correlates of women's unmet needs 3-5 years after diagnosis. Results: Of the 629 women who completed the CaSUN, 24% (n = 153) reported one or more unmet supportive care needs in the last month. Unmet needs at 3-5 years post-diagnosis were predicted by younger age (OR = 4.47; 95% CI: 2.09-9.56) and advanced disease stage at diagnosis (OR = 2.47; 95% CI: 1.38-4.45), and correlated with greater cancer symptoms (OR = 1.78; 95% CI: 1.05-3.02), lower limb swelling (OR = 2.50; 95% CI: 1.51-4.15), symptoms of anxiety (OR = 2.21; 95% CI: 1.31-3.72), and less availability of social support (OR = 3.42; 95% CI: 1.92-6.11). Women with a history of comorbidities (OR = 0.47; 95% CI: 0.27-0.82) and those living in a rural area at the time of diagnosis (OR = 0.56; 95% CI: 0.34-0.92) were less likely to report unmet needs. Conclusions: Sociodemographic, health, and psychosocial factors seem important for identifying women who will or will not have unmet needs several years following endometrial cancer. Longitudinal assessments of people's needs over the course of their cancer trajectory may be an effective way to identify areas that should receive further attention from health providers.
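
For readers unfamiliar with how odds ratios and confidence intervals like those above are produced, the sketch below shows a multivariable-adjusted logistic regression of the same general shape using statsmodels. The file name and column names are hypothetical placeholders, not the study's data.

```python
# Sketch of a multivariable-adjusted logistic regression with ORs and 95% CIs.
# Variable names and data file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("casun_followup.csv")   # hypothetical file
predictors = ["younger_age", "advanced_stage", "symptom_score",
              "limb_swelling", "anxiety", "low_social_support"]
X = sm.add_constant(df[predictors])      # add intercept column
y = df["any_unmet_need"]                 # 1 = one or more unmet needs

fit = sm.Logit(y, X).fit()
odds_ratios = np.exp(fit.params)         # exponentiated coefficients = ORs
ci = np.exp(fit.conf_int())              # 95% CIs on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```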

Relevance: 20.00%

Abstract:

Objective: To establish the demographic, health status and insurance determinants of pre-hospital ambulance non-usage for patients with emergency medical needs. Methods: Triage category, date of birth, sex, marital status, country of origin, method and time of arrival, ambulance insurance status, diagnosis, and disposal were collected for all patients who presented over a four month period (n=10 229) to the emergency department of a major provincial hospital. Data for patients with urgent (n=678) or critical care needs (n=332) who did not use pre-hospital care were analysed using Poisson regression. Results: Only a small percentage of the total sample were triaged as having urgent medical needs (6.6%) or critical care needs (3.2%). Predictors of usage for those with urgent care needs included age greater than 65 years (prevalence ratio (PR)=0.54; 95% confidence interval (CI)=0.35 to 0.83), being admitted to intensive care or transferred to another hospital (PR=0.62; 95% CI=0.44 to 0.89) or ward (PR=0.72; 95% CI=0.56 to 0.93), and ambulance insurance status (PR=0.67; 95% CI=0.52 to 0.86). Sex, marital status, time of day and country of origin were not predictive of usage or non-usage. Predictors of usage for those with critical care needs included age 65 years or greater (PR=0.45; 95% CI=0.25 to 0.81) and a diagnosis of trauma (PR=0.49; 95% CI=0.26 to 0.92). A non-English speaking background was predictive of non-usage (PR=1.98; 95% CI=1.06 to 3.70). Sex, marital status, time of day, triage and ambulance insurance status were not predictive of non-usage. Conclusions: Socioeconomic and medical factors variously influence ambulance usage depending on the severity or urgency of the medical condition. Ambulance insurance status was less of an influence as severity of condition increased, suggesting that, at a critical level of urgency, patients without insurance are willing to pay for a pre-hospital ambulance service.
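
The prevalence ratios above come from Poisson regression on a binary outcome. A common way to do this is "modified Poisson" with a robust (sandwich) variance estimator; the abstract does not state which variance estimator was used, and the column names below are hypothetical, so treat this as a sketch of the general technique.

```python
# Sketch of estimating prevalence ratios (PRs) via Poisson regression with a
# robust covariance, a standard choice for binary outcomes. Hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ed_presentations.csv")   # hypothetical file
X = sm.add_constant(df[["age_over_65", "insured", "admitted_icu_or_transfer"]])
y = df["used_ambulance"]                   # 1 = arrived by ambulance

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
prs = np.exp(fit.params)                   # exponentiated coefficients = PRs
ci = np.exp(fit.conf_int())                # 95% CIs on the PR scale
print(pd.concat([prs.rename("PR"), ci], axis=1))
```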

Relevance: 20.00%

Abstract:

We examined differences in response latencies obtained during a validated video-based hazard perception driving test between three healthy, community-dwelling groups: 22 mid-aged (35-55 years), 34 young-old (65-74 years), and 23 old-old (75-84 years) current drivers, matched for gender, education level, and vocabulary. We found no significant difference in performance between mid-aged and young-old groups, but the old-old group was significantly slower than the other two groups. The differences between the old-old group and the other groups combined were independently mediated by useful field of view (UFOV), contrast sensitivity, and simple reaction time measures. Given that hazard perception latency has been linked with increased crash risk, these results are consistent with the idea that increased crash risk in older adults could be a function of poorer hazard perception, though this decline does not appear to manifest until age 75+ in healthy drivers.

Relevance: 20.00%

Abstract:

Section 35 of the Insurance Contracts Act 1984 requires insurers offering insurance policies in six prescribed areas "to clearly inform" prospective insureds of any departure their policies may constitute from the standard covers established by the Act and its accompanying Regulations. This prescribed insurance contracts regime was designed to remedy comprehension problems generated by the length and complexity of insurance documents and to alleviate misunderstanding over the terms and conditions of individual policies. This article examines the rationale underpinning s 35 and the prescribed insurance contracts regime and looks at the operation of the legislation with particular reference to home contents insurance in Australia. It is argued that the means whereby disclosure of derogation from standard cover may be effected largely negates the thrust of the prescribed insurance contract reform. Recommendations to address these operational deficiencies are made.

Relevance: 20.00%

Abstract:

Discharge planning has become increasingly important, with current trends toward shorter hospital stays, increased health care costs, and more community-based health services. Effective discharge planning ensures the safety and ongoing care of patients [1], and it also benefits health care providers and organizations: it results in shorter hospital stays, fewer readmissions, higher access rates to post-hospitalization services, greater patient satisfaction with the discharge, and improved quality of life and continuity of care [2,3]. All acute care patients and their caregivers require some degree of preparation for discharge home: education about their health status, risks, and treatment; help setting health goals and maintaining a good level of self-care; information about community resources; and follow-up appointments and referrals to appropriate community health providers. Inadequate preparation exposes the patient to unnecessary risks of recurrence or complications of the acute complaint, neglect of nonacute comorbidities, mismanagement and side effects of medication, disruption of family and social life, emotional distress, and financial loss [2-4]. The result may be re-presentation to the emergency department. It is noteworthy that up to 18% of ED presentations are revisits within 72 hours of the original visit [5]; many of these are considered preventable [6]. It is a primary responsibility of nurses to ensure that patients return to the community adequately prepared and with appropriate support in place. Up to 65% of ED patients are discharged home from the emergency department [7], and the characteristics of the emergency department and its patient population make the provision of a high standard of discharge planning uniquely difficult. In addition, discharge planning is neglected in contemporary emergency nursing: there are no monographs devoted to the subject, and there is little published research. In this article, three issues are explored: the importance of emergency nurses' participation in the discharge-planning process; impediments to their participation; and strategies to improve discharge planning in the emergency department.

Relevance: 20.00%

Abstract:

Lifecycle funds offered by retirement plan providers allocate aggressively to risky asset classes when the employee participants are young, gradually switching to more conservative asset classes as they grow older and approach retirement. This approach focuses on maximizing growth of the accumulation fund in the initial years and preserving its value in the later years. The authors simulate terminal wealth outcomes based on conventional lifecycle asset allocation rules as well as on contrarian strategies that reverse the direction of asset switching. The evidence suggests that the growth in portfolio size over time significantly impacts the asset allocation decision. Due to the portfolio size effect observed by the authors, the terminal value of accumulation in retirement accounts is influenced more by the asset allocation strategy adopted in later years than by that adopted in early years. By mechanistically switching to conservative assets in the later years of a plan, lifecycle strategies sacrifice significant growth opportunity and prove counterproductive to the participant's wealth accumulation objective. The authors conclude that this sacrifice does not seem to be compensated adequately in terms of reducing the risk of potentially adverse outcomes.
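
A minimal Monte Carlo sketch of the portfolio size effect described above is given below, contrasting a conventional declining glide path with its contrarian reversal. The return parameters, contribution amount, and glide paths are illustrative assumptions, not the authors' calibration.

```python
# Monte Carlo sketch: conventional lifecycle glide path (equity weight falling
# with age) vs the contrarian reversal (equity weight rising). Late-year
# allocations act on a much larger balance, which drives the size effect.
import numpy as np

rng = np.random.default_rng(0)
YEARS, SIMS, CONTRIB = 40, 10_000, 10_000.0           # illustrative assumptions
MU_EQ, SD_EQ, MU_BOND, SD_BOND = 0.07, 0.17, 0.03, 0.05

def terminal_wealth(equity_weights):
    wealth = np.zeros(SIMS)
    for w in equity_weights:                           # one equity weight per year
        eq = rng.normal(MU_EQ, SD_EQ, SIMS)            # simulated equity returns
        bd = rng.normal(MU_BOND, SD_BOND, SIMS)        # simulated bond returns
        wealth = (wealth + CONTRIB) * (1 + w * eq + (1 - w) * bd)
    return wealth

lifecycle = np.linspace(0.9, 0.1, YEARS)               # aggressive early, conservative late
contrarian = lifecycle[::-1]                           # reversed switching direction

for name, path in [("lifecycle", lifecycle), ("contrarian", contrarian)]:
    tw = terminal_wealth(path)
    print(f"{name:10s} median={np.median(tw):,.0f}  5th pct={np.percentile(tw, 5):,.0f}")
```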

Relevance: 20.00%

Abstract:

The aim of this case-control study of 617 children was to investigate early childhood caries (ECC) risk indicators in a non-fluoridated region of Australia. ECC cases were recruited from childcare facilities, public hospitals and private specialist clinics to source children from different socioeconomic backgrounds. Non-ECC controls were recruited from the same childcare facilities. A multinomial logistic modelling approach was used for statistical analysis. The results showed that a large percentage of children tested positive for Streptococcus mutans if their mothers also tested positive. A common risk indicator found in ECC children from childcare facilities and public hospitals was visible plaque (OR 4.1, 95% CI 1.0-15.9, and OR 8.7, 95% CI 2.3-32.9, respectively). Compared to ECC-free controls, the risk indicators specific to childcare cases were enamel hypoplasia (OR 4.2, 95% CI 1.0-18.3), difficulty in cleaning the child's teeth (OR 6.6, 95% CI 2.2-19.8), presence of S. mutans (OR 4.8, 95% CI 0.7-32.6), sweetened drinks (OR 4.0, 95% CI 1.2-13.6) and maternal anxiety (OR 5.1, 95% CI 1.1-25.0). Risk indicators specific to public hospital cases were S. mutans presence in the child (OR 7.7, 95% CI 1.3-44.6) or mother (OR 8.1, 95% CI 0.9-72.4), ethnicity (OR 5.6, 95% CI 1.4-22.1), and the mother's access to a pension or health care card (OR 20.5, 95% CI 3.5-119.9). By contrast, a history of chronic ear infections was found to be protective against ECC in childcare children (OR 0.28, 95% CI 0.09-0.82). The biological, socioeconomic and maternal risk indicators demonstrated in the present study can be employed in models of ECC that can be usefully applied in future longitudinal studies.

Relevance: 20.00%

Abstract:

A hip fracture causes permanent lifestyle changes for older people. Two important post-operative mortality indicators for this group are the time from fracture to surgery and pre-operative health status, yet no research is available investigating relationships between time to surgery and health status. The researchers aimed to establish the health status risks for patients aged over 65 years with a non-pathological hip fracture, to guide nursing care interventions. A prospective cohort design was used to investigate relationships between time to surgery and pre-operative health status indicators including skin integrity risk, vigor, mental state, bowel function, and continence. Twenty-nine patients, with a mean age of 81.93 years (SD 9.49), were recruited. The mean time from the Time 1 assessment to surgery was 52.72 hours (SD 58.35), with a range of 1 hour to 219 hours. At Time 2, the mean scores for vigor and skin integrity risk were significantly higher, indicating poorer health status. A change in health status occurred, but, possibly due to the small sample size, it was difficult to relate this result to time to surgery. The results nevertheless inform pre-operative care for this group.

Relevance: 20.00%

Abstract:

Cutaneous cholecalciferol synthesis has not been considered in making recommendations for vitamin D intake. Our objective was to model the effects of sun exposure, vitamin D intake, and skin reflectance (pigmentation) on serum 25-hydroxyvitamin D (25(OH)D) in young adults with a wide range of skin reflectance and sun exposure. Four cohorts of participants (n = 72 total) were studied for 7-8 wk in the fall, winter, spring, and summer in Davis, CA [38.5° N, 121.7° W, elevation 49 ft (15 m)]. Skin reflectance was measured using a spectrophotometer, vitamin D intake using food records, and sun exposure using polysulfone dosimeter badges. A multiple regression model (R^2 = 0.55; P < 0.0001) was developed and used to predict the serum 25(OH)D concentration for participants with low [median for African ancestry (AA)] and high [median for European ancestry (EA)] skin reflectance and with low [20th percentile, ~20 min/d, ~18% body surface area (BSA) exposed] and high (80th percentile, ~90 min/d, ~35% BSA exposed) sun exposure, assuming an intake of 200 IU/d (5 µg/d). Predicted serum 25(OH)D concentrations for AA individuals with low and high sun exposure were 24 and 42 nmol/L in the winter and 40 and 60 nmol/L in the summer. Corresponding values for EA individuals were 35 and 60 nmol/L in the winter and 58 and 85 nmol/L in the summer. To achieve 25(OH)D ≥75 nmol/L, we estimate that EA individuals with high sun exposure need a vitamin D intake of 1300 IU/d in the winter, and AA individuals with low sun exposure need 2100-3100 IU/d year-round.
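
The prediction step described above (fit a multiple regression, then evaluate it at chosen covariate profiles) can be sketched as follows. The file, column names, and profile values are hypothetical; only the model structure mirrors the abstract.

```python
# Sketch of a multiple regression predicting serum 25(OH)D from skin
# reflectance, sun exposure, and vitamin D intake. Hypothetical data and names.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("vitd_cohort.csv")      # hypothetical file
X = sm.add_constant(df[["skin_reflectance", "sun_exposure_index", "intake_iu"]])
fit = sm.OLS(df["serum_25ohd_nmol_l"], X).fit()
print(fit.rsquared)                      # the paper reports R^2 = 0.55

# Predict for a single illustrative covariate profile at 200 IU/d intake.
profile = pd.DataFrame({"const": [1.0], "skin_reflectance": [0.35],
                        "sun_exposure_index": [0.8], "intake_iu": [200.0]})
print(fit.predict(profile))              # predicted 25(OH)D, nmol/L
```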

Relevance: 20.00%

Abstract:

Background and Significance: Venous leg ulcers are a significant cause of chronic ill-health for 1-3% of those aged over 60 years, increasing in incidence with age. The condition is difficult and costly to heal, consuming 1-2.5% of total health budgets in developed countries and up to 50% of community nursing time. Unfortunately, after healing there is a recurrence rate of 60-70%, frequently within the first 12 months after healing. Although some risk factors associated with higher recurrence rates have been identified (e.g. prolonged ulcer duration, deep vein thrombosis), in general there is limited evidence on treatments to effectively prevent recurrence. Patients are generally advised to undertake activities which aim to improve the impaired venous return (e.g. compression therapy, leg elevation, exercise). However, only compression therapy has some evidence to support its effectiveness in prevention, and problems with adherence to this strategy are well documented.

Aim: The aim of this research was to identify factors associated with recurrence by determining relationships between recurrence and demographic factors, health, physical activity, psychosocial factors and self-care activities to prevent recurrence.

Methods: Two studies were undertaken: a retrospective study of participants diagnosed with a venous leg ulcer which healed 12 to 36 months prior to the study (n=122); and a prospective longitudinal study of participants recruited as their ulcer healed, with data collected for 12 months following healing (n=80). Data were collected from medical records on demographics, medical history, ulcer history and treatments; and from self-report questionnaires on physical activity, nutrition, psychosocial measures, ulcer history, compression and other self-care activities. Follow-up data for the prospective study were collected every three months for 12 months after healing. For the retrospective study, a logistic regression model determined the independent influences of variables on recurrence. For the prospective study, median time to recurrence was calculated using the Kaplan-Meier method, and a Cox proportional-hazards regression model was used to adjust for potential confounders and determine the effects of preventive strategies and psychosocial factors on recurrence.

Results: In total, 68% of participants in the retrospective study and 44% of participants in the prospective study suffered a recurrence. After mutual adjustment for all variables in multivariable regression models, leg elevation, compression therapy, self-efficacy and physical activity were found to be consistently related to recurrence in both studies. In the retrospective study, leg elevation, wearing Class 2 or 3 compression hosiery, the level of physical activity, cardiac disease and self-efficacy scores remained significantly associated (p < 0.05) with recurrence. The model was significant (p < 0.001), with an R^2 equivalent of 0.62. Examination of relationships between psychosocial factors and adherence to wearing compression hosiery found that wearing compression hosiery was significantly positively associated with participants' knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Analysis of data from the prospective study found 35 recurrences (44%) in the 12 months following healing, with a median time to recurrence of 27 weeks. After adjustment for potential confounders, a Cox proportional hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20-25 mmHg) or 3 (30-40 mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p < 0.05) with a lower risk of recurrence, while male gender and a history of DVT remained significant risk factors for recurrence. Overall the model was significant (p < 0.001), with an R^2 equivalent of 0.72.

Conclusions: The high rates of recurrence found in these studies highlight the urgent need for further information in this area to support the development of effective strategies for prevention. Overall, results indicate that leg elevation, physical activity, compression hosiery and strategies to improve self-efficacy are likely to prevent recurrence. In addition, optimal management of depression and strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy. This research provides important information for the development of strategies to prevent recurrence of venous leg ulcers, with the potential to improve health and decrease health care costs in this population.
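
The prospective analysis above combines a Kaplan-Meier estimate of median time to recurrence with a Cox proportional-hazards model. A minimal sketch using the lifelines library follows; the file and column names are hypothetical, and the covariates shown are only those named in the abstract.

```python
# Sketch of the time-to-recurrence analysis: Kaplan-Meier median, then a Cox
# proportional-hazards model. Hypothetical data file and column names.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("ulcer_followup.csv")   # hypothetical file
# weeks_followed: weeks from healing to recurrence or censoring
# recurred: 1 = ulcer recurred within the 12-month follow-up

kmf = KaplanMeierFitter()
kmf.fit(df["weeks_followed"], event_observed=df["recurred"])
print(kmf.median_survival_time_)         # the study reports a median of 27 weeks

cph = CoxPHFitter()
cph.fit(df[["weeks_followed", "recurred", "leg_elevation_hrs", "compression_days",
            "self_efficacy", "social_support", "male", "history_dvt"]],
        duration_col="weeks_followed", event_col="recurred")
cph.print_summary()                      # hazard ratios with 95% CIs
```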

Relevance: 20.00%

Abstract:

Older adults, especially those who are acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, all this evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in the Asian elderly and none has been validated in Singapore. Due to the multiethnic, cultural, and language differences in Singapore older adults, the results from non-Asian validation studies may not be applicable. It is therefore important to identify validated population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore.

The aims of this study are therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes.

A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] administered in no particular order. The total scores were not computed during the screening process, so the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward.

The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, "Do you often feel sad or depressed?"), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (refer to section 5.6).

Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. The SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutritional screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to experiencing a decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher level care (adjusted OR 2.46, 95% CI [1.27-4.70]).

This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of the very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutritional screening and assessment tools amongst hospitalised older adults in an Asian population. Results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore. As older adults may have developed malnutrition prior to hospital admission, or may experience clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community, and continuous nutritional monitoring should extend beyond hospitalisation.
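
The tool-validation step described above (diagnostic performance of a screening score against the SGA reference standard) can be sketched as follows. The file, column names, and cut-off are hypothetical; only the metrics (AUC, sensitivity, specificity) mirror those reported.

```python
# Sketch of validating a nutrition screening score against the SGA reference
# standard, reporting AUC, sensitivity, and specificity. Hypothetical names.
import pandas as pd
from sklearn.metrics import roc_auc_score, confusion_matrix

df = pd.read_csv("nutrition_screening.csv")   # hypothetical file
truth = df["sga_malnourished"]                # 1 = malnourished by SGA
score = df["ttsh_nst_score"]                  # continuous screening score

auc = roc_auc_score(truth, score)             # study reports AUC 0.865

flag = (score >= 3).astype(int)               # assumed cut-off, for illustration
tn, fp, fn, tp = confusion_matrix(truth, flag).ravel()
sensitivity = tp / (tp + fn)                  # study reports 84%
specificity = tn / (tn + fp)                  # study reports 79%
print(f"AUC={auc:.3f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```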

Relevance: 20.00%

Abstract:

Background: A distal-to-proximal technique has been recommended for anti-cancer therapy administration. There is no evidence to suggest that a 24-hour delay of treatment is necessary for patients with a previous uncomplicated venous puncture proximal to the administration site. Objectives: This study aims to identify whether the practice of a 24-hour delay between a venous puncture and subsequent cannulation for anti-cancer therapies at a distal site is necessary for preventing extravasation. Methods: A prospective cohort study was conducted with 72 outpatients receiving anti-cancer therapy via an administration site distal to at least one previous uncomplicated venous puncture on the same arm, in a tertiary cancer centre in Australia. Participants were interviewed and assessed at baseline before treatment and on day 7 for incidence of extravasation/phlebitis. Results: Among the 72 participants with 99 occasions of treatment, there was one incident of infiltration (possible extravasation) at the venous puncture site proximal to the administration site and two incidents of phlebitis at the administration site. Conclusions: A 24-hour delay is unnecessary if an alternative vein can be accessed for anti-cancer therapy after a proximal venous puncture. Implications for practice: Extravasation can occur at a venous puncture site proximal to an administration site in the same vein. However, the nurse can administer anti-cancer therapy at a distal site if, through visual inspection and palpation, the nurse can confidently determine that the vein of choice is not in any way connected to the previous puncture site.