48 results for Ambulatory
in Queensland University of Technology - ePrints Archive
Abstract:
OBJECTIVE: To evaluate the scored Patient-Generated Subjective Global Assessment (PG-SGA) tool as an outcome measure in clinical nutrition practice and determine its association with quality of life (QoL). DESIGN: A prospective 4-week study assessing the nutritional status and QoL of ambulatory patients receiving radiation therapy to the head, neck, rectal or abdominal area. SETTING: Australian radiation oncology facilities. SUBJECTS: Sixty cancer patients aged 24-85 y. INTERVENTION: Scored PG-SGA questionnaire, subjective global assessment (SGA), QoL (EORTC QLQ-C30 version 3). RESULTS: According to SGA, 65.0% (39) of subjects were well nourished, 28.3% (17) were moderately malnourished or suspected of being malnourished, and 6.7% (4) were severely malnourished. PG-SGA score and global QoL were correlated (r=-0.66, P<0.001) at baseline. There was a decrease in nutritional status according to PG-SGA score (P<0.001) and SGA (P<0.001), and a decrease in global QoL (P<0.001), after 4 weeks of radiotherapy. There was a linear trend for change in PG-SGA score (P<0.001) and change in global QoL (P=0.003) between patients who improved (5%), maintained (56.7%) or deteriorated (33.3%) in nutritional status according to SGA. There was a correlation between change in PG-SGA score and change in QoL after 4 weeks of radiotherapy (r=-0.55, P<0.001). Regression analysis determined that 26% of the variation in change in QoL was explained by change in PG-SGA score (P=0.001). CONCLUSION: The scored PG-SGA is a nutrition assessment tool that identifies malnutrition in ambulatory oncology patients receiving radiotherapy and can be used to predict the magnitude of change in QoL.
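The baseline association (r = -0.66) and the regression in which change in PG-SGA score explained 26% of the variation in change in QoL can be reproduced in outline with standard correlation and least-squares routines. The following Python sketch uses fabricated placeholder values, not the study data, purely to illustrate the calculations involved.

import numpy as np
from scipy import stats

# Hypothetical changes in PG-SGA score (higher = worse nutrition) and global QoL
pg_sga_change = np.array([2.0, 5.0, -1.0, 8.0, 3.0, 0.0, 6.0, 4.0, 7.0, 1.0])
qol_change    = np.array([-5.0, -12.0, 3.0, -20.0, -8.0, 1.0, -15.0, -9.0, -18.0, -2.0])

# Pearson correlation between change in PG-SGA score and change in QoL
r, p_value = stats.pearsonr(pg_sga_change, qol_change)

# Simple linear regression; the squared correlation is the proportion of QoL
# variation explained by the PG-SGA change (reported as about 26% in the abstract)
slope, intercept, r_reg, p_reg, stderr = stats.linregress(pg_sga_change, qol_change)
print(f"r = {r:.2f} (p = {p_value:.3f}), R^2 = {r_reg**2:.2f}")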
Abstract:
Vitamin D deficiency and insufficiency are now seen as a contemporary health problem in Australia, with possible widespread health effects not limited to bone health [1]. Despite this, the Vitamin D status (measured as serum 25-hydroxyvitamin D (25(OH)D)) of ambulatory adults has been overlooked in this country. Serum 25(OH)D status is especially important among this group, as studies have shown a link between Vitamin D and fall risk in older adults [2]. Limited data also exist on the contributions of sun exposure via ultraviolet radiation and dietary intake to serum 25(OH)D status in this population. The aims of this project were to assess the serum 25(OH)D status of a group of older ambulatory adults in South East Queensland; to assess the association between their serum 25(OH)D status and functional measures as possible indicators of fall risk; to obtain data on the sources of Vitamin D in this population and assess whether this intake was related to serum 25(OH)D status; and to describe sun protection and exposure behaviors in this group and investigate whether a relationship existed between these and serum 25(OH)D status. The collection of these data assists in addressing key gaps identified in the literature with regard to this population group and their Vitamin D status in Australia. A representative convenience sample of participants (N=47) over 55 years of age was recruited for this cross-sectional, exploratory study, which was undertaken in December 2007 in south-east Queensland (Brisbane and the Sunshine Coast). Participants were required to complete a sun exposure questionnaire in addition to a Calcium and Vitamin D food frequency questionnaire. Timed Up and Go and handgrip dynamometry tests were used to examine functional capacity. Serum 25(OH)D status and blood measures of Calcium, Phosphorus and Albumin were determined through blood tests. The mean and median serum 25(OH)D for all participants in this study were 85.8 nmol/L (standard deviation 29.7 nmol/L) and 81.0 nmol/L (range 22-158 nmol/L), respectively. Analysis at the bivariate level revealed a statistically significant relationship between serum 25(OH)D status and location, with participants living on the Sunshine Coast having a mean serum 25(OH)D status 21.3 nmol/L higher than participants living in Brisbane (p=0.014). While at the descriptive level there was an apparent trend towards higher outdoor exposure being associated with higher levels of serum 25(OH)D, no statistically significant associations were observed between serum 25(OH)D status and the measures of outdoor exposure, sun protection behaviors and phenotypic characteristics. Intake of both Calcium and Vitamin D was low in this sample: sixty-eight percent (68%) of participants did not meet the Estimated Average Requirement (EAR) for Calcium (median=771.0 mg; range=218.0-2616.0 mg), while eighty-seven percent (87%) did not meet the Adequate Intake (AI) for Vitamin D (median=4.46 ug; range=0.13-30.0 ug). This raises the question of how realistic meeting the new Adequate Intakes for Vitamin D is when there is such a low level of Vitamin D fortification in this country. However, participants meeting the AI for Vitamin D were observed to have a significantly higher serum 25(OH)D status compared with those not meeting the AI (p=0.036), showing that meeting the AI for Vitamin D may play a significant role in determining Vitamin D status in this population.
When the data were stratified by categories of outdoor exposure time, a trend was observed whereby dietary Vitamin D intake appeared to become a more important determinant of serum 25(OH)D status in participants with lower outdoor exposure. While a trend towards better Timed Up and Go performance in participants with higher 25(OH)D status was seen, this was only significant for females (p=0.014). Handgrip strength showed no statistically significant association with serum 25(OH)D status. The high serum 25(OH)D status in our sample almost certainly explains the limited relationship between functional measures and serum 25(OH)D. However, the observation of an association between slower Timed Up and Go times and lower serum 25(OH)D levels, even with a small sample size, is noteworthy, as slower Timed Up and Go times have been associated with increased fall risk in older adults [3]. Multivariable regression analysis revealed location as the only significant determinant of serum 25(OH)D status (p=0.014), with trends (p>0.1) towards higher serum 25(OH)D in participants who met the AI for Vitamin D and who rated themselves as having a higher health status. The results of this exploratory study show that 93.6% of participants had adequate 25(OH)D status, possibly due to measurement being taken in the summer season and the convenience nature of the sample. However, many participants did not meet their dietary Calcium and Vitamin D requirements, which may indicate inadequate intake of these nutrients in older Australians and a higher risk of osteoporosis. The relationship between serum 25(OH)D and functional measures in this population also requires further study, especially in older adults displaying Vitamin D insufficiency or deficiency.
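The multivariable step described above (serum 25(OH)D modelled on location, meeting the Vitamin D AI, and self-rated health) can be sketched with an ordinary least squares model. The data frame below is entirely fabricated and the variable names are assumptions for illustration; it is not the study dataset or its analysis code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: serum 25(OH)D in nmol/L with three candidate determinants
df = pd.DataFrame({
    "serum_25ohd":  [92, 70, 105, 64, 88, 77, 110, 59, 96, 81],
    "location":     ["Sunshine Coast", "Brisbane", "Sunshine Coast", "Brisbane",
                     "Sunshine Coast", "Brisbane", "Sunshine Coast", "Brisbane",
                     "Sunshine Coast", "Brisbane"],
    "meets_ai":     [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],   # met the Vitamin D Adequate Intake
    "health_score": [4, 3, 5, 2, 4, 3, 5, 2, 3, 4],   # self-rated health (1-5)
})

# Ordinary least squares with location treated as a categorical predictor
model = smf.ols("serum_25ohd ~ C(location) + meets_ai + health_score", data=df).fit()
print(model.summary())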
Abstract:
Oral intake of ascorbic acid is essential for optimum health in human beings. Continuous ambulatory peritoneal dialysis (CAPD) patients have an increased need for ascorbic acid, because of increased loss through dialysate, reduced intake owing to nausea and loss of appetite, and increased oxidative stress. However, optimum intake is still controversial. We studied 50 clinically stable patients to determine the relationship between oral ascorbic acid intake and serum ascorbic acid (SAA) level. Total oral intake ranged from 28 mg daily to 412 mg daily. Only one patient had an oral intake of ascorbic acid below 60 mg per day. The SAA levels ranged from 1 mg/L to 36.17 mg/L. Although a strong correlation existed between intake and SAA (p < 0.001, R2 = 0.47), the variation in SAA at any given intake level was wide. Of the studied patients, 62% had an SAA < 8.7 mg/L, 40% had an SAA < 5.1 mg/L (below the level in a healthy population), and 12% had a level below 2 mg/L (scorbutic). None of the patients demonstrated clinical manifestations of scurvy. Our results show that, in CAPD patients, ascorbic acid deficiency can be reliably detected only with SAA measurements, and oral intake may influence SAA level. To maintain ascorbic acid in the normal range for healthy adults, daily oral intake needs to be increased above the U.S. recommended dietary allowance to 80-140 mg.
Abstract:
Background Diabetic foot complications are recognised as the most common reason for diabetes-related hospitalisation and lower extremity amputations. Multi-faceted strategies to reduce diabetic foot hospitalisation and amputation rates have been successful. However, most diabetic foot ulcers are managed in ambulatory settings, where data availability is poor and studies are limited. The project aimed to develop and evaluate strategies to improve the management of diabetic foot complications in three diverse ambulatory settings and measure the subsequent impact on hospitalisation and amputation. Methods Multifaceted strategies were implemented in 2008, including: multi-disciplinary teams, clinical pathways and training, clinical indicators, telehealth support and surveys. A retrospective audit of consecutive patient records from July 2006 - June 2007 determined baseline clinical indicators (n = 101). A clinical pathway teleform was implemented as a clinical record and clinical indicator analyser in all sites in 2008 (n = 327) and followed up in 2009 (n = 406). Results Prior to the intervention, clinical pathways were not used and multi-disciplinary teams were limited. There were absolute improvements of 15% in treating according to risk in 2009, and of 34% and 19% in surveillance of the high-risk population in 2008 and 2009 respectively (p < 0.001). Improvements of 13-66% (p < 0.001) were recorded in 2008 for individual clinical activities, reaching a performance of > 92% in perfusion, ulcer depth, infection assessment and management, offloading and education. Hospitalisation impacts included reductions of up to 64% in amputation rates per 100,000 population (p < 0.001) and of 24% in average length of stay (p < 0.001). Conclusion These findings support the use of multi-faceted strategies in diverse ambulatory services to standardise practice, improve the management of diabetic foot complications and positively impact hospitalisation outcomes. As of October 2010, these strategies had been rolled out to over 25 ambulatory sites, representing 66% of Queensland Health districts, managing 1,820 patients and 13,380 occasions of service, including 543 healed ulcer patients. It is expected that this number will rise dramatically as an incentive payment for the use of the teleform is expanded.
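The before/after comparison of clinical-indicator performance reported above is, in essence, a comparison of proportions between audit periods. A minimal sketch of such a test is shown below in Python; the counts are invented for illustration and do not come from the audits described in the abstract.

from scipy.stats import chi2_contingency

# Hypothetical counts of episodes in which a given indicator (e.g. documented
# perfusion assessment) was met, at baseline and after the intervention
met_2007, total_2007 = 40, 101
met_2009, total_2009 = 380, 406

table = [
    [met_2007, total_2007 - met_2007],
    [met_2009, total_2009 - met_2009],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")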
Abstract:
Background: Hospitalisation for ambulatory care sensitive conditions (ACSHs) has become a recognised tool to measure access to primary care. Timely and effective outpatient care is highly relevant to refugee populations given their past exposure to torture and trauma, and poor access to adequate health care in their countries of origin and during flight. Little is known about ACSHs among resettled refugee populations. With the aim of examining the hypothesis that people from refugee backgrounds have higher ACSHs than people born in the country of hospitalisation, this study analysed a six-year state-wide hospital discharge dataset to estimate ACSH rates for residents born in refugee-source countries and compared them with the Australia-born population. Methods: Hospital discharge data between 1 July 1998 and 30 June 2004 from the Victorian Admitted Episodes Dataset were used to assess ACSH rates among residents born in eight refugee-source countries and compare them with the Australia-born average. Rate ratios and 95% confidence intervals were used to illustrate these comparisons. Four categories of ambulatory care sensitive conditions were measured: total, acute, chronic and vaccine-preventable. Country of birth was used as a proxy indicator of refugee status. Results: When compared with the Australia-born population, hospitalisations for total and acute ambulatory care sensitive conditions were lower among refugee-born persons over the six-year period. Chronic and vaccine-preventable ACSHs were largely similar between the two population groups. Conclusion: Contrary to our hypothesis, preventable hospitalisation rates among people born in refugee-source countries were no higher than Australia-born population averages. More research is needed to elucidate whether low rates of preventable hospitalisation indicate better health status, appropriate health habits, timely and effective care-seeking behaviour and outpatient care, or overall low levels of health care-seeking due to other more pressing needs during the initial period of resettlement. It is important to unpack dimensions of health status and health care access in refugee populations through ad hoc surveys, as the refugee population is not a homogeneous group despite sharing a common experience of forced displacement and violence-related trauma.
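A rate ratio with a 95% confidence interval, as used in the comparison above, can be computed directly from event counts and person-years. The sketch below uses made-up counts and a standard log-scale approximation assuming Poisson-distributed events; it is illustrative only and not the study's analysis.

import math

# Hypothetical ACSH counts and person-years for a refugee-born group and the
# Australia-born comparison group
events_refugee, person_years_refugee = 120, 45_000
events_ausborn, person_years_ausborn = 9_500, 2_600_000

rate_refugee = events_refugee / person_years_refugee
rate_ausborn = events_ausborn / person_years_ausborn
rate_ratio = rate_refugee / rate_ausborn

# Approximate 95% CI on the log scale (Poisson assumption)
se_log_rr = math.sqrt(1 / events_refugee + 1 / events_ausborn)
lower = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)
print(f"RR = {rate_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")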
Abstract:
Introduction Malnutrition is common among hospitalised patients, with poor follow-up of nutrition support post-discharge. Published studies on the efficacy of ambulatory nutrition support (ANS) for malnourished patients post-discharge are scarce. The aims of this study were to evaluate the rate of dietetics follow-up of malnourished patients post-discharge, before (2008) and after (2010) implementation of a new ANS service, and to evaluate nutritional outcomes post-implementation. Materials and Methods Consecutive samples of 261 (2008) and 163 (2010) adult inpatients referred to dietetics and assessed as malnourished using Subjective Global Assessment (SGA) were enrolled. All subjects received inpatient nutrition intervention and dietetic outpatient clinic follow-up appointments. For the 2010 cohort, ANS was initiated to provide telephone follow-up and home visits for patients who failed to attend the outpatient clinic. Subjective Global Assessment, body weight, quality of life (EQ-5D VAS) and handgrip strength were measured at baseline and five months post-discharge. Paired t-tests were used to compare pre- and post-intervention results. Results In 2008, only 15% of patients returned for follow-up with a dietitian within four months post-discharge. After implementation of ANS in 2010, the follow-up rate was 100%. Mean weight improved from 44.0 ± 8.5 kg to 46.3 ± 9.6 kg, EQ-5D VAS from 61.2 ± 19.8 to 71.6 ± 17.4, and handgrip strength from 15.1 ± 7.1 kg force to 17.5 ± 8.5 kg force; p<0.001 for all. Seventy-four percent of patients improved in SGA score. Conclusion Ambulatory nutrition support resulted in significant improvements in follow-up rate, nutritional status and quality of life of malnourished patients post-discharge.
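The baseline versus five-month comparisons reported above (weight, EQ-5D VAS, handgrip strength) are paired comparisons. A minimal paired t-test sketch is shown below; the weights are fabricated for illustration and are not the study data.

import numpy as np
from scipy import stats

# Hypothetical paired weights (kg) for the same patients at baseline and five months
weight_baseline = np.array([42.0, 45.5, 39.8, 50.2, 44.1, 47.3])
weight_5_months = np.array([44.5, 47.0, 41.2, 52.8, 46.0, 49.1])

t_stat, p_value = stats.ttest_rel(weight_baseline, weight_5_months)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")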
Abstract:
Objective There are no objective ambulatory studies on the temporal relationship between reflux and cough in children. Commercial pHmetry loggers have slow capture rates (0.25 Hz) that limit objective quantification of reflux and cough. The authors aimed to evaluate whether there is a temporal association between cough and acid pH in ambulatory children with chronic cough. Setting and patients The authors studied children (aged <14 years) with chronic cough, suspected of acid reflux and considered for pHmetry, using a specifically built ambulatory pHmetry-cough logger that enabled the simultaneous ambulatory recording of cough and pH with a fast (10 Hz) capture rate. Main outcome measures Coughs within (before and after) 10, 30, 60 and 120 s of a reflux episode (pH<4 for >0.5 s). Results A total of 5628 coughs in 20 children were analysed. Most coughs (83.9%) were independent of a reflux event. Cough-reflux (median 19, IQR 3-45) and reflux-cough (24.5, 13-51) sequences were equally likely to occur within 120 s. Within the 10 and 30 s time frames, reflux-cough sequences (10 s: median 2.5, IQR 0-7.25; 30 s: 6.5, 1.25-22.25) were significantly less frequent than reflux-no cough sequences (10 s: 27, IQR 15-65; 30 s: 24.5, 14.5-55.5) (p=0.0001 and p=0.001, respectively). No differences were found for the 60 and 120 s time frames. Cough-reflux sequences within 10 s (median 1.0, IQR 0-8) were significantly less frequent (p=0.0001) than no cough-reflux sequences (median 29.5, 15-67); the same held within 30 s (p=0.006) and 60 s (p=0.048) but not within 120 s (p=0.47). Conclusions In children with chronic cough and suspected of having gastro-oesophageal reflux disease, the temporal relationship between acid reflux and cough is unlikely to be causal.
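The core of such an analysis is pairing time-stamped events: for each reflux episode, checking whether a cough falls within a given number of seconds of it. The sketch below illustrates one direction (reflux followed by cough) with invented timestamps; it is not the logger's software or the study's analysis code.

import bisect

# Hypothetical event times in seconds from the start of the recording
cough_times = [12.0, 95.5, 300.2, 301.0, 640.8]
reflux_times = [90.0, 310.0, 500.0]

def count_reflux_cough(reflux_times, cough_times, window_s):
    """Count reflux episodes followed by at least one cough within window_s seconds."""
    cough_times = sorted(cough_times)
    count = 0
    for t in reflux_times:
        lo = bisect.bisect_right(cough_times, t)             # first cough after the reflux
        hi = bisect.bisect_right(cough_times, t + window_s)   # coughs up to the window edge
        if hi > lo:
            count += 1
    return count

for window in (10, 30, 60, 120):
    n = count_reflux_cough(reflux_times, cough_times, window)
    print(f"reflux episodes followed by a cough within {window} s: {n}")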
Abstract:
To evaluate the validity of the ActiGraph accelerometer for the measurement of physical activity intensity in children and adolescents with cerebral palsy (CP) using oxygen uptake (VO2) as the criterion measure. Thirty children and adolescents with CP (mean age 12.6 ± 2.0 years) wore an ActiGraph 7164 and a Cosmed K4b2 portable indirect calorimeter during four activities: quiet sitting, comfortable paced walking, brisk paced walking and fast paced walking. VO2 was converted to METs and activity energy expenditure and classified as sedentary, light or moderate-to-vigorous intensity according to the conventions for children. Mean ActiGraph counts min-1 were classified as sedentary, light or moderate-to-vigorous (MVPA) intensity using four different sets of cut-points. VO2 and counts min-1 increased significantly with increases in walking speed (P < 0.001). Receiver operating characteristic (ROC) curve analysis indicated that, of the four sets of cut-points evaluated, the Evenson et al. (J Sports Sci 26(14):1557-1565, 2008) cut-points had the highest classification accuracy for sedentary (92%) and MVPA (91%), as well as the second highest classification accuracy for light intensity physical activity (67%). A ROC curve analysis of data from our participants yielded a CP-specific cut-point for MVPA that was lower than the Evenson cut-point (2,012 vs. 2,296 counts min-1); however, the difference in classification accuracy was not statistically significant: 94% (95% CI = 88.2-97.7%) vs. 91% (95% CI = 83.5-96.5%). In conclusion, among children and adolescents with CP, the ActiGraph is able to differentiate between different intensities of walking. The use of the Evenson cut-points will permit the estimation of time spent in MVPA and allows comparisons to be made between activity measured in typically developing adolescents and adolescents with CP. © 2011 Springer-Verlag.
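Deriving an accelerometer cut-point from ROC analysis, as described above, amounts to choosing the counts-per-minute threshold that best separates MVPA from non-MVPA minutes against the calorimetry criterion. The Python sketch below uses fabricated counts and labels and a Youden-index criterion; the study's exact procedure may differ.

import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical counts per minute and whether calorimetry classified the minute as MVPA
counts_per_min = np.array([150, 420, 800, 1500, 1900, 2100, 2500, 3200, 4000, 5200])
is_mvpa = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(is_mvpa, counts_per_min)

# Youden index (sensitivity + specificity - 1) selects a candidate threshold
youden = tpr - fpr
best_cut_point = thresholds[np.argmax(youden)]
print(f"candidate MVPA cut-point: {best_cut_point:.0f} counts/min")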
Abstract:
Editorial: This theme issue of BJSM presents key papers from the 3rd International Conference on Ambulatory Monitoring of Physical Activity and Movement (ICAMPAM). The July 2013 conference was hosted by the University of Massachusetts and was attended by researchers, clinicians, students and technology vendors from North America, Europe, Australasia and Asia...
Abstract:
Background Cancer itself can alter the body's metabolic and physiologic processes and, consequently, its nutritional needs. As a result, some patients experience some degree of weight loss before the start of treatment. Aim The aim of the study was to determine at which chemotherapy treatment cycle patients with cancer begin to exhibit signs and symptoms of malnutrition. Methods A prospective descriptive correlational design was used to assess the nutritional risk of 111 patients with cancer receiving chemotherapy in the ambulatory setting. The data were collected using a nutritional screening tool. Findings The prevalence of malnutrition risk was 45% in patients undergoing the first cycle of chemotherapy. Patients who received the first three cycles of chemotherapy were 2.62 times more likely to develop malnutrition than those who received seven or more cycles. The risk of developing malnutrition varies depending on the type of cancer, the type of chemotherapy regimen, the number of chemotherapy cycles, body mass index and the stage of cancer. Conclusion The study found that about half of the patients had developed signs and symptoms of nutritional risk at cycle one. Hence, nutritional support may be required even before the start of chemotherapy.
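The risk factors listed above (cycle group, cancer type, regimen, body mass index, stage) are the kind of predictors usually examined with a logistic model of malnutrition risk. The sketch below is a generic illustration on simulated data with assumed variable names; the abstract does not state which model the authors fitted.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated dataset (111 patients) with assumed variable names; not the study data
rng = np.random.default_rng(0)
n = 111
df = pd.DataFrame({
    "malnourished": rng.integers(0, 2, n),   # 1 = at nutritional risk
    "early_cycle":  rng.integers(0, 2, n),   # 1 = within the first three cycles
    "bmi":          rng.normal(24, 4, n),
    "stage":        rng.integers(1, 5, n),
})

model = smf.logit("malnourished ~ early_cycle + bmi + C(stage)", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)           # e.g. the odds ratio for early_cycle
print(odds_ratios)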
Abstract:
BACKGROUND Physical therapy for youth with cerebral palsy (CP) who are ambulatory includes interventions to increase functional mobility and participation in physical activity (PA). Thus, reliable and valid measures are needed to document PA in youth with CP. OBJECTIVE The purpose of this study was to evaluate the inter-instrument reliability and concurrent validity of 3 accelerometer-based motion sensors with indirect calorimetry as the criterion for measuring PA intensity in youth with CP. METHODS Fifty-seven youth with CP (mean age=12.5 years, SD=3.3; 51% female; 49.1% with spastic hemiplegia) participated. Inclusion criteria were: aged 6 to 20 years, ambulatory, Gross Motor Function Classification System (GMFCS) levels I through III, able to follow directions, and able to complete the full PA protocol. Protocol activities included standardized activity trials with increasing PA intensity (resting, writing, household chores, active video games, and walking at 3 self-selected speeds), as measured by weight-relative oxygen uptake (in mL/kg/min). During each trial, participants wore bilateral accelerometers on the upper arms, waist/hip, and ankle and a portable indirect calorimeter. Intraclass correlation coefficients (ICCs) were calculated to evaluate inter-instrument reliability (left-to-right accelerometer placement). Spearman correlations were used to examine concurrent validity between accelerometer output (activity and step counts) and indirect calorimetry. Friedman analyses of variance with post hoc pair-wise analyses were conducted to examine the validity of accelerometers to discriminate PA intensity across activity trials. RESULTS All accelerometers exhibited excellent inter-instrument reliability (ICC=.94-.99) and good concurrent validity (rho=.70-.85). All accelerometers discriminated PA intensity across most activity trials. LIMITATIONS This PA protocol consisted of controlled activity trials. CONCLUSIONS Accelerometers provide valid and reliable measures of PA intensity among youth with CP.
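The two statistics named above can be sketched directly: a two-way intraclass correlation for left versus right sensor placement, and a Spearman correlation between accelerometer output and oxygen uptake. All numbers below are fabricated for illustration, and the ICC form shown (ICC(2,1), absolute agreement, single measures) is an assumption, since the abstract does not specify the model used.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical activity counts: rows = participants, columns = left / right sensor
x = np.array([
    [1200.0, 1185.0],
    [ 940.0,  965.0],
    [2100.0, 2075.0],
    [ 300.0,  330.0],
    [1750.0, 1720.0],
])

n, k = x.shape
grand = x.mean()
row_means, col_means = x.mean(axis=1), x.mean(axis=0)

msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-sensors mean square
sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
mse = sse / ((n - 1) * (k - 1))                        # residual mean square

icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")

# Concurrent validity: Spearman correlation of counts with VO2 (mL/kg/min), hypothetical
counts = np.array([100, 850, 1500, 2300, 3100, 4200])
vo2 = np.array([3.5, 8.0, 12.5, 16.0, 20.5, 25.0])
rho, p = spearmanr(counts, vo2)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")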
Abstract:
OBJECTIVES: To develop and validate a wandering typology. DESIGN: Cross-sectional, correlational descriptive design. SETTING: Twenty-two nursing homes and six assisted living facilities. PARTICIPANTS: One hundred forty-two residents with dementia who spoke English, met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria for dementia, scored less than 24 on the Mini-Mental State Examination (MMSE), were ambulatory (with or without an assistive device), and maintained a stable regimen of psychotropic medications were studied. MEASUREMENTS: Data on wandering were collected using direct observations, plotted serially according to rate and duration to yield 21 parameters, and reduced through factor analysis to four components: high rate, high duration, low to moderate rate and duration, and time of day. Other measures included the MMSE, Minimum Data Set 2.0 mobility items, the Cumulative Illness Rating Scale-Geriatric, and tympanic body temperature readings. RESULTS: Three groups of wanderers were identified through cluster analysis: classic, moderate, and subclinical. MMSE, mobility, and cardiac and upper and lower gastrointestinal problems differed between groups of wanderers and in comparison with nonwanderers. CONCLUSION: Results have implications for improving identification of wanderers and treatment of possible contributing factors.
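The clustering step described above (grouping residents into wanderer subtypes from rate and duration components) can be illustrated with a standard hierarchical cluster analysis. The feature matrix below is fabricated, and the choice of Ward linkage with three clusters is an assumption for illustration, not the study's reported procedure.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical component scores per resident (e.g. high rate, high duration, time of day)
features = np.array([
    [0.9, 0.8, 0.2],
    [0.8, 0.9, 0.1],
    [0.5, 0.4, 0.5],
    [0.4, 0.5, 0.6],
    [0.1, 0.1, 0.9],
    [0.2, 0.1, 0.8],
])

# Ward linkage on Euclidean distances, cut into three clusters
z = linkage(features, method="ward")
labels = fcluster(z, t=3, criterion="maxclust")
print(labels)   # e.g. three groups analogous to classic / moderate / subclinical wanderers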
Abstract:
Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identify symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification with data that depart from multivariate normality were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and the Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management.
Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
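One early step in the factor-analytic approach described above, deciding how many factors to retain, relies on the eigenvalues of the symptom correlation matrix (the basis of the scree plot, used here alongside the Minimum Average Partial procedure). The sketch below simulates distress ratings purely to show that calculation; it is not the study data and covers only this fragment of the full procedure (extraction, oblique rotation and cluster interpretation are omitted).

import numpy as np

# Simulated distress ratings: 219 patients x 42 symptoms (placeholder values only)
rng = np.random.default_rng(42)
ratings = rng.integers(0, 5, size=(219, 42)).astype(float)

corr = np.corrcoef(ratings, rowvar=False)              # symptom correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order

# Scree inspection: a sharp "elbow" in these values suggests how many factors to retain
print(np.round(eigenvalues[:10], 2))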