754 results for cross-over study

Relevance: 90.00%

Publisher:

Abstract:

The present study examined the effect of carbohydrate supplementation on changes in neutrophil counts and the plasma concentrations of cortisol and myoglobin after intense exercise. Eight well-trained male runners ran on a treadmill for 1 h at 85% maximal oxygen uptake on two separate occasions. In a double-blind cross-over design, subjects consumed either 750 ml of a 10% carbohydrate (CHO) drink or a placebo drink on each occasion. The order of the trials was counter-balanced. Blood was drawn immediately before and after exercise, and 1 h after exercise. Immediately after exercise, neutrophil counts (CHO, 49%; placebo, 65%; P<0.05) and the plasma concentrations of glucose (CHO, 43%; P<0.05), lactate (CHO, 130%; placebo, 130%; P<0.01), cortisol (CHO, 100%; placebo, 161%; P<0.01) and myoglobin (CHO, 194%; placebo, 342%; P<0.01) all increased significantly. One hour post-exercise, plasma myoglobin concentration (CHO, 331%; placebo, 482%; P<0.01) and neutrophil count (CHO, 151%; placebo, 230%; P<0.01) both increased further above baseline. CHO significantly attenuated plasma myoglobin concentration and the neutrophil count after exercise (P<0.01), but did not affect plasma cortisol concentration. The effects of CHO on plasma myoglobin concentration may be due to alterations in cytokine synthesis, insulin responses or myoglobin clearance rates from the bloodstream during exercise. Plasma cortisol responses to CHO during exercise may depend on the intensity of exercise, or the amount of CHO consumed. Lastly, cortisol appears to play a minor role in the mobilisation of neutrophils after intense exercise.
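The percentage figures in this abstract are relative changes from the pre-exercise baseline. A minimal sketch of that arithmetic, with invented absolute values (they are not reported in the abstract):

```python
def pct_change(baseline, value):
    """Percent change of a post-exercise value relative to its baseline."""
    return 100.0 * (value - baseline) / baseline

# Illustrative only: a neutrophil count rising from 3.0 to 4.95 (x10^9 cells/L)
# corresponds to the 65% immediate post-exercise increase reported for placebo.
print(round(pct_change(3.0, 4.95), 1))  # 65.0
```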


Alterations in cognitive function are characteristic of the aging process in humans and other animals. However, the nature of these age-related changes in cognition is complex and is likely to be influenced by interactions between genetic predispositions and environmental factors, resulting in dynamic fluctuations within and between individuals. These inter- and intra-individual fluctuations are evident in both so-called normal cognitive aging and at the onset of cognitive pathology. Mild Cognitive Impairment (MCI), thought to be a prodromal phase of dementia, represents perhaps the final opportunity to mitigate cognitive declines that may lead to terminal conditions such as dementia. The prognosis for people with MCI is mixed, with evidence suggesting that within 10 years of diagnosis many will remain stable, many will improve, and many will transition to dementia. If the characteristics of people who do not progress to dementia from MCI can be identified and replicated in others, it may be possible to reduce or delay dementia onset, thus reducing a growing personal and public health burden. Furthermore, if MCI onset can be prevented or delayed, the burden of cognitive decline in aging populations worldwide may be reduced. A cognitive domain that is sensitive to the effects of advancing age, and declines in which have been shown to presage the onset of dementia in MCI patients, is executive function. Moreover, environmental factors such as diet and physical activity have been shown to affect performance on tests of executive function. For example, improvements in executive function have been demonstrated as a result of increased aerobic and anaerobic physical activity and, although the evidence is not as strong, findings from dietary interventions suggest certain nutrients may preserve or improve executive functions in old age. These encouraging findings have been demonstrated in older adults with MCI and their non-impaired peers. 
However, there are some gaps in the literature that need to be addressed. For example, little is known about the effect on cognition of an interaction between diet and physical activity. Both are important contributors to health and wellbeing, and a growing body of evidence attests to their importance in mental and cognitive health in aging individuals. Yet physical activity and diet are rarely considered together in the context of cognitive function. There is also little known about potential underlying biological mechanisms that might explain the physical activity/diet/cognition relationship. The first aim of this program of research was to examine the individual and interactive roles of physical activity and diet, specifically long-chain omega-3 polyunsaturated fatty acid (LCn3) consumption, as predictors of MCI status. The second aim was to examine executive function in MCI in the context of the individual and interactive effects of physical activity and LCn3. A third aim was to explore the role of immune and endocrine system biomarkers as possible mediators in the relationship between LCn3, physical activity and cognition. Study 1a was a cross-sectional analysis of MCI status as a function of the interaction between physical activity and erythrocyte proportions of LCn3. The marine-based LCn3s eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) have both received support in the literature as having cognitive benefits, although comparisons of the relative benefits of EPA or DHA, particularly in relation to the aetiology of MCI, are rare. Furthermore, a limited amount of research has examined the cognitive benefits of physical activity in terms of MCI onset. No studies have examined the potential interactive benefits of physical activity and either EPA or DHA. Eighty-four male and female adults aged 65 to 87 years, 50 with MCI and 34 without, participated in Study 1a. 
A binary logistic regression was conducted with MCI status as the dependent variable, and the individual and interactive relationships between physical activity and either EPA or DHA as predictors. Physical activity was measured using a questionnaire, and specific physical activity categories were weighted according to the metabolic equivalents (METs) of each activity to create a physical activity intensity index (PAI). A significant relationship was identified between MCI outcome and the interaction between the PAI and EPA; participants with a higher PAI and higher erythrocyte proportions of EPA were more likely to be classified as non-MCI than their less active peers with less EPA. Study 1b was a randomised controlled trial using the participants from Study 1a who were identified with MCI. Given the importance of executive function as a determinant of progression to more severe forms of cognitive impairment and dementia, Study 1b aimed to examine the individual and interactive effects of physical activity and supplementation with either EPA or DHA on executive function in a sample of older adults with MCI. Fifty male and female participants were randomly allocated to supplementation groups to receive 6 months of supplementation with EPA, DHA, or linoleic acid (LA), a long-chain polyunsaturated omega-6 fatty acid not known for its cognitive enhancing properties. Physical activity was measured using the PAI from Study 1a at baseline and follow-up. Executive function was measured using five tests thought to measure different executive function domains. Erythrocyte proportions of EPA and DHA were higher at follow-up; however, PAI was not significantly different. There was also a significant improvement in three of the five executive function tests at follow-up. However, regression analyses revealed that none of the variance in executive function at follow-up was predicted by EPA, DHA, PAI, the EPA by PAI interaction, or the DHA by PAI interaction. 
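The Study 1a analysis is a logistic model with a product (interaction) term. The sketch below is hypothetical: it uses synthetic data and scikit-learn rather than the authors' actual software, with variable names (PAI, EPA, MCI) taken from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 84  # total sample size reported for Study 1a
pai = rng.normal(30.0, 10.0, n)  # physical activity intensity index (synthetic)
epa = rng.normal(1.0, 0.3, n)    # erythrocyte EPA proportion (synthetic)

# Simulate the reported direction of effect: higher PAI together with higher
# EPA lowers the odds of an MCI classification.
logit = 0.5 - 0.02 * pai - 0.5 * epa - 0.03 * pai * epa
mci = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Design matrix: two main effects plus their product (the interaction term).
X = np.column_stack([pai, epa, pai * epa])
model = LogisticRegression(max_iter=1000).fit(X, mci)
print(model.coef_.shape)  # (1, 3): one coefficient per predictor
```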
The absence of an effect may be due to a small sample resulting in limited power to find an effect, the lack of change in physical activity over time in terms of volume and/or intensity, or a combination of both reduced power and no change in physical activity. Study 2a was a cross-sectional study using cognitively unimpaired older adults to examine the individual and interactive effects of LCn3 and PAI on executive function. Several possible explanations for the absence of an effect in Study 1b were identified; from this consideration of alternative explanations it was hypothesised that post-onset interventions with LCn3, either alone or in interaction with self-reported physical activity, may not be beneficial in MCI. Thus, executive function responses to the individual and interactive effects of physical activity and LCn3 were examined in a sample of older male and female adults without cognitive impairment (n = 50). A further aim of Study 2a was to operationalise executive function using principal components analysis (PCA) of several executive function tests. This approach was used firstly as a data reduction technique to overcome the task impurity problem, and secondly to examine the executive function structure of the sample for evidence of de-differentiation. Two executive function components were identified as a result of the PCA (EF 1 and EF 2). However, EPA, DHA, the PAI, and the EPA by PAI and DHA by PAI interactions did not account for any variance in the executive function components in subsequent hierarchical multiple regressions. Study 2b was an exploratory correlational study designed to explore the possibility that immune and endocrine system biomarkers may act as mediators of the relationship between LCn3, PAI, the interaction between LCn3 and PAI, and executive functions. 
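The PCA-based data reduction described for Study 2a can be sketched as follows. The five test scores here are synthetic, generated from two latent factors to mirror the two-component (EF 1, EF 2) solution, and are not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 50  # sample size reported for Study 2a

# Five synthetic executive-function test scores driven by two latent factors.
latent = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.1],
                     [0.1, 0.9], [0.2, 0.8]])
scores = latent @ loadings.T + rng.normal(scale=0.3, size=(n, 5))

# Standardise, then keep two components as composite EF 1 / EF 2 scores,
# side-stepping the task impurity of any single test.
pca = PCA(n_components=2)
ef = pca.fit_transform(StandardScaler().fit_transform(scores))
print(ef.shape)  # (50, 2): one EF 1 and one EF 2 score per participant
```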
Insulin-like growth factor-1 (IGF-1), an endocrine system growth hormone, and interleukin-6 (IL-6), an immune system cytokine involved in the acute inflammatory response, have both been shown to affect cognition, including executive functions. Moreover, IGF-1 and IL-6 have been shown to be antithetical insofar as chronically increased IL-6 has been associated with reduced IGF-1 levels, a relationship that has been linked to age-related morbidity. Further, physical activity and LCn3 have been shown to modulate levels of both IGF-1 and IL-6. Thus, it is possible that the cognitive enhancing effects of LCn3, physical activity or their interaction are mediated by changes in the balance between IL-6 and IGF-1. Partial and non-parametric correlations were conducted in a subsample of participants from Study 2a (n = 13) to explore these relationships. Correlations of interest did not reach significance; however, the coefficients were quite large for several relationships, suggesting studies with larger samples may be warranted. In summary, the current program of research found some evidence supporting an interaction between EPA, but not DHA, and higher energy expenditure via physical activity in differentiating between older adults with and without MCI. However, an RCT examining executive function in older adults with MCI found no support for increasing EPA or DHA while maintaining current levels of energy expenditure. Furthermore, a cross-sectional study examining executive function in older adults without MCI found no support for better executive function performance as a function of increased EPA or DHA consumption, greater energy expenditure via physical activity, or an interaction between physical activity and either EPA or DHA. Finally, an examination of endocrine and immune system biomarkers revealed promising relationships in terms of executive function in non-MCI older adults, particularly with respect to LCn3 and physical activity. 
Taken together, these findings demonstrate a potential benefit of increasing physical activity and LCn3 consumption, particularly EPA, in mitigating the risk of developing MCI. In contrast, no support was found for a benefit to executive function as a result of increased physical activity, LCn3 consumption or an interaction between the two, in participants either with or without MCI. These results are discussed with reference to previous findings in the literature, including possible limitations and opportunities for future research.


The increasing prevalence of obesity in society has been associated with a number of atherogenic risk factors such as insulin resistance. Aerobic training is often recommended as a strategy to induce weight loss, with a greater impact of high-intensity levels on cardiovascular function and insulin sensitivity, and a greater impact of moderate-intensity levels on fat oxidation. Anaerobic high-intensity (supramaximal) interval training has been advocated to improve cardiovascular function, insulin sensitivity and fat oxidation. However, obese individuals tend to have a lower tolerance of high-intensity exercise due to discomfort. Furthermore, some obese individuals may compensate for the increased energy expenditure by eating more and/or becoming less active. Recently, both moderate- and high-intensity aerobic interval training have been advocated as alternative approaches. However, it is still uncertain which approach is more effective in terms of increasing fat oxidation, given the issues with levels of fitness and motivation, and compensatory behaviours. Accordingly, the objective of this thesis was to compare the influence of moderate- and high-intensity interval training on fat oxidation and eating behaviour in overweight/obese men. Two exercise interventions were undertaken by 10-12 overweight/obese men to compare their responses to study variables, including fat oxidation and eating behaviour, during moderate- and high-intensity interval training (MIIT and HIIT). The acute training intervention was a methodological study designed to examine the validity of using exercise intensity from the graded exercise test (GXT), which measured the intensity that elicits maximal fat oxidation (FATmax), to prescribe interval training during 30-min MIIT. The 30-min MIIT session involved 5-min repetitions of workloads 20% below and 20% above the FATmax. 
The acute intervention was extended to involve HIIT in a cross-over design to compare the influence of MIIT and HIIT on eating behaviour using subjective appetite sensation and food preference through the liking and wanting test. The HIIT consisted of 15-sec interval training at 85 %VO2peak interspersed by 15-sec unloaded recovery, with a total mechanical work equal to MIIT. The medium term training intervention was a cross-over 4-week (12 sessions) MIIT and HIIT exercise training with a 6-week detraining washout period. The MIIT sessions consisted of 5-min cycling stages at ±20% of mechanical work at 45 %VO2peak, and the HIIT sessions consisted of repetitive 30-sec work at 90 %VO2peak and 30-sec interval rests, during identical exercise sessions of between 30 and 45 mins. Assessments included a constant-load test (45 %VO2peak for 45 mins) followed by 60-min recovery at baseline and the end of 4-week training, to determine fat oxidation rate. Participants’ responses to exercise were measured using blood lactate (BLa), heart rate (HR) and rating of perceived exertion (RPE) and were measured during the constant-load test and in the first intervention training session of every week during training. Eating behaviour responses were assessed by measuring subjective appetite sensations, liking and wanting and ad libitum energy intake. Results of the acute intervention showed that FATmax is a valid method to estimate VO2 and BLa, but is not valid to estimate HR and RPE in the MIIT session. While the average rate of fat oxidation during 30-min MIIT was comparable with the rate of fat oxidation at FATmax (0.16 ±0.09 and 0.14 ±0.08 g/min, respectively), fat oxidation was significantly higher at minute 25 of MIIT (P≤0.01). In addition, there was no significant difference between MIIT and HIIT in the rate of appetite sensations after exercise, but there was a tendency towards a lower rate of hunger after HIIT. 
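The work-matching between MIIT and HIIT sessions can be illustrated numerically. The power outputs below are assumed for illustration (they are not reported in the abstract); the point is that a ±20% moderate-intensity alternation averages to the reference power, while a 15-s on / 15-s off high-intensity protocol needs twice its loaded time in elapsed time to deliver the same mechanical work.

```python
def work_kj(power_w, seconds):
    """Mechanical work (kJ) accumulated at a constant power output."""
    return power_w * seconds / 1000.0

ref_power = 120.0  # assumed workload (W) at 45 %VO2peak
# 30-min MIIT: alternating 5-min blocks at 20% below and 20% above reference,
# i.e. 15 min in total at each level.
miit_work = work_kj(0.8 * ref_power, 15 * 60) + work_kj(1.2 * ref_power, 15 * 60)

hiit_power = 240.0  # assumed higher workload (W) for the HIIT work intervals
loaded_s = miit_work * 1000.0 / hiit_power  # loaded time for equal total work
session_min = 2 * loaded_s / 60.0           # 15-s work / 15-s rest doubles it

print(round(miit_work, 1), round(session_min, 1))  # 216.0 kJ, 30.0-min session
```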
Different intensities of interval exercise also did not affect explicit liking or implicit wanting. Results of the medium-term intervention indicated that the interval training did not affect body composition, fasting insulin or fasting glucose. Maximal aerobic capacity increased significantly during the GXT (2.8% and 7.0% after MIIT and HIIT, respectively; P≤0.01), and fat oxidation increased significantly during the acute constant-load exercise test (96% and 43% after MIIT and HIIT, respectively; P≤0.01). RPE decreased significantly more after HIIT than after MIIT (P≤0.05), and the decrease in BLa during the constant-load test was greater after HIIT than after MIIT, but this difference did not reach statistical significance (P=0.09). In addition, following constant-load exercise, exercise-induced hunger and desire to eat decreased more after HIIT than after MIIT, but these differences were not significant (p = 0.07 for desire to eat). Exercise-induced liking of high-fat sweet (HFSW) and high-fat non-sweet (HFNS) foods increased after MIIT and decreased after HIIT (p = 0.09 for HFNS). The intervention explained 12.4% of the change in fat intake (p = 0.07). This research is significant in that it confirmed two points in the acute study. While the rate of fat oxidation increased during MIIT, the average rate of fat oxidation during 30-min MIIT was comparable with the rate of fat oxidation at FATmax. In addition, manipulating the intensity of acute interval exercise did not affect appetite sensations or liking and wanting. In the medium-term intervention, constant-load exercise-induced fat oxidation increased significantly after interval training, independent of exercise intensity. In addition, desire to eat, explicit liking for HFNS foods and fat intake collectively indicated that MIIT is accompanied by greater compensation of eating behaviour than HIIT. Findings from this research will assist in developing exercise strategies to provide obese men with various training options. 
In addition, the finding that overweight/obese men expressed a lower RPE and decreased BLa after HIIT compared with MIIT is contrary to the view that obese individuals may not tolerate high-intensity interval training. Therefore, high-intensity interval training can be advocated among the obese adult male population. Future studies may extend this work by using a longer-term intervention.


Purpose: To investigate the association between conjunctival ultraviolet autofluorescence (UVAF), a biomarker of ocular ultraviolet radiation (UVR) exposure, and prevalent pterygium. Methods: We conducted a cross-sectional study on Norfolk Island, South Pacific. All permanent residents aged ≥15 years were invited to participate. Participants completed a sun exposure questionnaire and underwent autorefraction and slit lamp biomicroscope examination. Area of conjunctival UVAF (sum of temporal and nasal areas in right and left eyes) was determined using computerized methods. Multivariate logistic and linear regression models were used to estimate the associations with pterygia and UVAF, respectively. Results: Of 641 participants, 70 people (10.9%) had pterygium in one or both eyes, and prevalence was higher in males (15.0% versus 7.7%, p = 0.003). Significant independent associations with pterygium in any eye were UVAF (per 10 mm2) [odds ratio (OR) 1.16, 95% confidence interval (CI) 1.16–1.28, p = 0.002], tanning skin phenotype (OR 2.17, 1.20–3.92, p = 0.010) and spending more than three-quarters of the day outside (OR 2.22, 1.20–4.09, p = 0.011). Increasing quartile of UVAF was associated with increased risk of pterygium following adjustment for age, sex and time outdoors (pTrend = 0.002). Independent associations with increasing UVAF (per 10 mm2) were decreasing age, time outdoors, skin type and male gender (all p < 0.001). UVAF area correlated well with the duration of outdoor activity (pTrend < 0.001). Conclusion: Pterygium occurs in approximately one-tenth of Norfolk Islanders. Increasing conjunctival UVAF is associated with prevalent pterygia, confirming earlier epidemiological, laboratory and ray-tracing evidence that pterygia are associated with UVR. Protection from the sun should be encouraged to reduce the prevalence of pterygium in the community.


Background: Nutrition screening is usually administered by nurses. However, most studies on nutrition screening tools have not used nurses to validate the tools. The 3-Minute Nutrition Screening (3-MinNS) assesses weight loss, dietary intake and muscle wastage, with the composite score of the three components used to determine risk of malnutrition. The aim of the study was to determine the validity and reliability of 3-MinNS administered by nurses, who are the intended assessors. Methods: In this cross-sectional study, three ward-based nurses screened 121 patients aged 21 years and over using 3-MinNS in three wards within 24 hours of admission. A dietitian then assessed the patients’ nutritional status using Subjective Global Assessment (SGA) within 48 hours of admission, whilst blinded to the results of the screening. To assess the reliability of 3-MinNS, 37 patients screened by the first nurse were re-screened by a second nurse within 24 hours, who was blinded to the results of the first nurse. The sensitivity, specificity and best cutoff score for 3-MinNS were determined using the receiver operating characteristic (ROC) curve. Results: The best cutoff score to identify all patients at risk of malnutrition using 3-MinNS was three, with sensitivity of 89% and specificity of 88%. This cutoff point also identified all (100%) severely malnourished patients. There was strong correlation between 3-MinNS and SGA (r=0.78, p<0.001). The agreement between the two nurses conducting 3-MinNS was 78.3%. Conclusion: 3-Minute Nutrition Screening is a valid and reliable tool for nurses to identify patients at risk of malnutrition.
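The cutoff-selection step can be sketched with scikit-learn's ROC utilities. The data below are synthetic stand-ins (3-MinNS-style integer scores for 121 patients against a dietitian's binary assessment), so the chosen cutoff will not reproduce the study's value of three.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
n = 121  # number of patients screened in the study

# Synthetic reference standard and screening scores: malnourished patients
# (per the dietitian's assessment) tend to score higher on a 0-9 scale.
malnourished = rng.binomial(1, 0.3, n)
score = np.clip(rng.poisson(1 + 3 * malnourished), 0, 9)

fpr, tpr, thresholds = roc_curve(malnourished, score)
best = int(np.argmax(tpr - fpr))  # Youden's J: maximise sens + spec - 1
print(thresholds[best], tpr[best], 1.0 - fpr[best])  # cutoff, sens, spec
```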



Background & Aims: Access to sufficient amounts of safe and culturally acceptable foods is a fundamental human right. Food security exists when all people, at all times, have physical, social, and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Food insecurity therefore occurs when the availability of, or access to, sufficient amounts of nutritionally adequate, culturally appropriate and safe foods, or the ability to acquire such foods in socially acceptable ways, is limited. Food insecurity may result in significant adverse effects for the individual, and these outcomes may vary between adults and children. Among adults, food insecurity may be associated with overweight or obesity, poorer self-rated general health, depression, increased health-care utilisation and dietary intakes less consistent with national recommendations. Among children, food insecurity may result in poorer self- or parent-reported general health, behavioural problems, lower levels of academic achievement and poor social outcomes. The majority of research investigating the potential correlates of food insecurity has been undertaken in the United States (US), where regular national screening for food insecurity is undertaken using a comprehensive multi-item measurement. In Australia, screening for food insecurity takes place every three years via a crude single-item measure included in the National Health Survey (NHS). This measure has been shown to underestimate the prevalence of food insecurity by 5%. From 1995 to 2004, the prevalence of food insecurity among the Australian population remained stable at 5%. Due to the perceived low prevalence of this issue, screening for food insecurity was not undertaken in the most recent NHS. Furthermore, there are few Australian studies investigating the potential determinants of food insecurity and none investigating potential outcomes among adults and children. 
This study aimed to examine these issues by a) investigating the prevalence of food insecurity among households residing in disadvantaged urban areas, comparing prevalence rates estimated by the more comprehensive 18-item and 6-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM) to those estimated by the current single-item measure used for surveillance in Australia, and b) investigating the potential determinants and outcomes of food insecurity. Methods: A comprehensive literature review was undertaken to investigate the potential determinants and consequences of food insecurity in developed countries. This was followed by a cross-sectional study in which 1000 households from the most disadvantaged 5% of Brisbane areas were sampled and data collected via mail-based survey (final response rate = 53%, n = 505). Data were collected on food security status, sociodemographic characteristics (household income, education, age, gender, employment status, housing tenure and living arrangements), fruit and vegetable intakes, meat and take-away consumption, presence of depressive symptoms, presence of chronic disease and body mass index (BMI) among adults. Among children, data pertaining to BMI, parent-reported general health, days away from school and activities, and behavioural problems were collected. Rasch analysis was used to investigate the psychometric properties of the 18-, 10- and 6-item adaptations of the USDA-FSSM, and McNemar's test was used to investigate the difference in the prevalence of food insecurity as measured by these three adaptations compared to the current single-item measure used in Australia. Chi-square tests and logistic regression were used to investigate the differences in dietary and health outcomes among adults and health and behavioural outcomes among children. Results were adjusted for equivalised household income and, where necessary, for indigenous status, education and family type. 
Results: Overall, 25% of households in these urbanised, disadvantaged areas reported experiencing food insecurity; this increased to 34% when only households with children were analysed. The current reliance on a single-item measure to screen for food insecurity may underestimate the true burden among the Australian population, as this measure was shown to significantly underestimate the prevalence of food insecurity by five percentage points. Internationally, major potential determinants of food insecurity included poverty and indicators of poverty, such as low income, unemployment and lower levels of education. Ethnicity, age, transportation, and cooking and financial skills were also found to be potential determinants of food insecurity. Among Australian adults in disadvantaged urban areas, food insecurity was associated with a three-fold increase in experiencing poorer self-rated general health and a two-to-five-fold increase in the risk of depression. Furthermore, adults from food-insecure households were two to three times more likely to have seen a general practitioner and/or been admitted to hospital within the previous six months, compared to their food-secure counterparts. Weight status and intakes of fruits, vegetables and meat were not associated with food insecurity. Among Australian households with children, those in the lowest income tertile were over 16 times more likely to experience food insecurity compared to those in the highest tertile. After adjustment for equivalised household income, children from food-insecure households were three times more likely to have missed days of school or other activities. Furthermore, children from food-insecure households displayed a two-fold increase in atypical emotions and behavioural difficulties. 
Conclusions: Food insecurity is an important public health issue and may contribute to the burden on the health care system through its associations with depression and increased health care utilisation among adults, and behavioural and emotional problems among children. Current efforts to monitor food insecurity in Australia do not occur frequently and use a tool that may underestimate the prevalence of food insecurity. Efforts should be made to improve the regularity of screening for food insecurity via the use of a more accurate screening measure. Most of the current strategies that aim to alleviate food insecurity do not sufficiently address the issue of insufficient financial resources for acquiring food, a factor which is an important determinant of food insecurity. Programs to address this issue should be developed in collaboration with groups at higher risk of developing food insecurity and should incorporate strategies to address the issue of low income as a barrier to food acquisition.


Background: Cancer-related malnutrition is associated with increased morbidity, poorer tolerance of treatment, decreased quality of life, increased hospital admissions, and increased health care costs (Isenring et al., 2013). This study’s aim was to determine whether a novel, automated screening system was a useful tool for nutrition screening when compared against a full nutrition assessment using the Patient-Generated Subjective Global Assessment (PG-SGA) tool. Methods: A single-site, observational, cross-sectional study was conducted in an outpatient oncology day care unit within a Queensland tertiary facility, with three hundred outpatients (51.7% male, mean age 58.6 ± 13.3 years). Eligibility criteria: ≥18 years, receiving anticancer treatment, able to provide written consent. Patients completed the Malnutrition Screening Tool (MST). Nutritional status was assessed using the PG-SGA. Data for the automated screening system were extracted from the pharmacy software program Charm, including body mass index (BMI) and weight records dating back up to six months. Results: The prevalence of malnutrition was 17%. Flagging any weight loss over the three to six weeks prior to the most recent weight record, as identified by the automated screening system and compared against PG-SGA-classified malnutrition, yielded 56.52% sensitivity, 35.43% specificity, 13.68% positive predictive value and 81.82% negative predictive value. An MST score of 2 or greater was a stronger predictor of nutritional risk relative to PG-SGA-classified malnutrition (70.59% sensitivity, 69.48% specificity, 32.14% positive predictive value, 92.02% negative predictive value). Conclusions: Both the automated screening system and the MST fell short of the accepted professional standard for sensitivity (80%) or specificity (60%) when compared to the PG-SGA. However, although the MST remains a better predictor of malnutrition in this setting, uptake of this tool in the Oncology Day Care Unit remains challenging.
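The MST figures quoted above are internally consistent. With 17% of 300 patients malnourished (51 of 300), one 2×2 table reproduces all four reported percentages; the cell counts below are reconstructed from those percentages, not taken directly from the paper:

```python
# Reconstructed 2x2 table for MST >= 2 versus PG-SGA malnutrition
# (51 malnourished, 249 well-nourished, as implied by 17% of 300):
tp, fn = 36, 15   # malnourished patients flagged / missed by the MST
fp, tn = 76, 173  # well-nourished patients flagged / correctly cleared

sensitivity = tp / (tp + fn)  # 36 / 51
specificity = tn / (tn + fp)  # 173 / 249
ppv = tp / (tp + fp)          # 36 / 112
npv = tn / (tn + fn)          # 173 / 188
print(f"{sensitivity:.2%} {specificity:.2%} {ppv:.2%} {npv:.2%}")
# -> 70.59% 69.48% 32.14% 92.02%, matching the reported values
```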

Relevance: 90.00%

Abstract:

Purpose: The objective of the study was to assess the bioequivalence of two tablet formulations of capecitabine and to explore the effect of age, gender, body surface area and creatinine clearance on the systemic exposure to capecitabine and its metabolites. Methods: The study was designed as an open, randomized two-way crossover trial. A single oral dose of 2000 mg capecitabine was administered on two separate days to 25 patients with solid tumors. On one day, the patients received four 500-mg tablets of formulation B (test formulation) and on the other day, four 500-mg tablets of formulation A (reference formulation). The washout period between the two administrations was between 2 and 8 days. After each administration, serial blood and urine samples were collected for up to 12 and 24 h, respectively. Unchanged capecitabine and its metabolites were determined in plasma using LC/MS-MS and in urine by NMRS. Results: Based on the primary pharmacokinetic parameter, AUC(0-∞) of 5'-DFUR, equivalence was concluded for the two formulations, since the 90% confidence interval of the estimate of formulation B relative to formulation A of 97% to 107% was within the acceptance region 80% to 125%. There was no clinically significant difference between the t(max) for the two formulations (median 2.1 versus 2.0 h). The estimate for C(max) was 111% for formulation B compared to formulation A and the 90% confidence interval of 95% to 136% was within the reference region 70% to 143%. Overall, these results suggest no relevant difference between the two formulations regarding the extent to which 5'-DFUR reached the systemic circulation and the rate at which 5'-DFUR appeared in the systemic circulation. The overall urinary excretions were 86.0% and 86.5% of the dose, respectively, and the proportion recovered as each metabolite was similar for the two formulations. The majority of the dose was excreted as FBAL (61.5% and 60.3%), all other chemical species making a minor contribution. 
Univariate and multivariate regression analysis to explore the influence of age, gender, body surface area and creatinine clearance on the log-transformed pharmacokinetic parameters AUC(0-∞) and C(max) of capecitabine and its metabolites revealed no clinically significant effects. The only statistically significant results were obtained for AUC(0-∞) and C(max) of intact drug and for C(max) of FBAL, which were higher in females than in males. Conclusion: The bioavailability of 5'-DFUR in the systemic circulation was practically identical after administration of the two tablet formulations. Therefore, the two formulations can be regarded as bioequivalent. The variables investigated (age, gender, body surface area, and creatinine clearance) had no clinically significant effect on the pharmacokinetics of capecitabine or its metabolites.
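The equivalence conclusion above follows a simple decision rule: the two formulations are declared bioequivalent when the whole 90% confidence interval of the test/reference ratio lies inside the acceptance region. A sketch of that rule, using the intervals and regions quoted in the abstract:

```python
def bioequivalent(ci_low, ci_high, accept_low=80.0, accept_high=125.0):
    """Equivalence is concluded when the entire 90% CI of the test/reference
    ratio (expressed as a percentage) falls within the acceptance region."""
    return accept_low <= ci_low and ci_high <= accept_high

# AUC(0-inf) of 5'-DFUR: 90% CI of 97% to 107% vs the 80% to 125% region
auc_ok = bioequivalent(97, 107)            # True
# Cmax: 90% CI of 95% to 136% vs the widened 70% to 143% reference region
cmax_ok = bioequivalent(95, 136, 70, 143)  # True
```

Any interval that crosses either boundary, even partially, fails the test; this is why the whole CI, not just the point estimate, must sit inside the region.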

Relevance: 90.00%

Abstract:

BACKGROUND: In single-group studies, chromosomal rearrangements of the anaplastic lymphoma kinase gene (ALK) have been associated with marked clinical responses to crizotinib, an oral tyrosine kinase inhibitor targeting ALK. Whether crizotinib is superior to standard chemotherapy with respect to efficacy is unknown. METHODS: We conducted a phase 3, open-label trial comparing crizotinib with chemotherapy in 347 patients with locally advanced or metastatic ALK-positive lung cancer who had received one prior platinum-based regimen. Patients were randomly assigned to receive oral treatment with crizotinib (250 mg) twice daily or intravenous chemotherapy with either pemetrexed (500 mg per square meter of body-surface area) or docetaxel (75 mg per square meter) every 3 weeks. Patients in the chemotherapy group who had disease progression were permitted to cross over to crizotinib as part of a separate study. The primary end point was progression-free survival. RESULTS: The median progression-free survival was 7.7 months in the crizotinib group and 3.0 months in the chemotherapy group (hazard ratio for progression or death with crizotinib, 0.49; 95% confidence interval [CI], 0.37 to 0.64; P<0.001). The response rates were 65% (95% CI, 58 to 72) with crizotinib, as compared with 20% (95% CI, 14 to 26) with chemotherapy (P<0.001). An interim analysis of overall survival showed no significant improvement with crizotinib as compared with chemotherapy (hazard ratio for death in the crizotinib group, 1.02; 95% CI, 0.68 to 1.54; P=0.54). Common adverse events associated with crizotinib were visual disorder, gastrointestinal side effects, and elevated liver aminotransferase levels, whereas common adverse events with chemotherapy were fatigue, alopecia, and dyspnea. Patients reported greater reductions in symptoms of lung cancer and greater improvement in global quality of life with crizotinib than with chemotherapy.
CONCLUSIONS: Crizotinib is superior to standard chemotherapy in patients with previously treated, advanced non-small-cell lung cancer with ALK rearrangement. (Funded by Pfizer; ClinicalTrials.gov number, NCT00932893.) Copyright © 2013 Massachusetts Medical Society.

Relevance: 90.00%

Abstract:

Objective: To evaluate responses to self-administered brief questions regarding consumption of vegetables and fruit by comparison with blood levels of serum carotenoids and red-cell folate. Design: A cross-sectional study in which participants reported their usual intake of fruit and vegetables in servings per day, and serum levels of five carotenoids (α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and lycopene) and red-cell folate were measured. Serum carotenoid levels were determined by high-performance liquid chromatography, and red-cell folate by an automated immunoassay system. Settings and subjects: Between October and December 2000, a sample of 1598 adults aged 25 years and over, from six randomly selected urban centres in Queensland, Australia, were examined as part of a national study conducted to determine the prevalence of diabetes and associated cardiovascular risk factors. Results: Statistically significant (P<0.01) associations with vegetable and fruit intake (categorised into groups: ≤1 serving, 2–3 servings and ≥4 servings per day) were observed for α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and red-cell folate. The mean level of these carotenoids and of red-cell folate increased with increasing frequency of reported servings of vegetables and fruit, both before and after adjusting for potential confounding factors. A significant association with lycopene was observed only for vegetable intake before adjusting for confounders. Conclusions: These data indicate that brief questions may be a simple and valuable tool for monitoring vegetable and fruit intake in this population.
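The analysis above categorises reported intake into three groups (≤1, 2–3 and ≥4 servings per day) before comparing carotenoid and folate levels across them. A trivial sketch of that grouping step, with hypothetical labels:

```python
def serving_category(servings_per_day):
    """Map reported daily servings onto the study's three intake groups.
    The string labels are illustrative, not taken from the study."""
    if servings_per_day <= 1:
        return "<=1 serving"
    elif servings_per_day <= 3:
        return "2-3 servings"
    return ">=4 servings"
```

Collapsing a continuous intake report into ordered categories like this is what allows a simple trend test of biomarker level against serving frequency.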

Relevance: 90.00%

Abstract:

Background: Whole body cryotherapy (WBC) is the therapeutic application of extreme cold air for a short duration. Minimal evidence is available for determining the optimal exposure time. Purpose: To explore whether the length of WBC exposure induces differential changes in inflammatory markers, tissue oxygenation, skin and core temperature, and thermal sensation and comfort. Method: This study was a randomised cross-over design with participants acting as their own controls. Fourteen male professional first-team Super League rugby players were exposed to 1, 2 and 3 minutes of WBC at -135°C. Testing took place the day after a competitive league fixture, with each exposure separated by seven days. Results: No significant changes were found in the inflammatory cytokine interleukin-6 (IL-6). Significant reductions (p<0.05) in deoxyhaemoglobin were found for gastrocnemius and vastus lateralis. In vastus lateralis, significant reductions (p<0.05) in oxyhaemoglobin and tissue oxygenation index were also demonstrated. Significant reductions (p<0.05) in skin temperature were recorded, but no significant changes in core temperature. Significant reductions (p<0.05) in thermal sensation and comfort were recorded. Conclusion: Three brief exposures to WBC separated by 1 week are not sufficient to induce physiological changes in IL-6 or core temperature. There are, however, significant changes in tissue oxyhaemoglobin, deoxyhaemoglobin, tissue oxygenation index, skin temperature and thermal sensation. We conclude that a 2-minute WBC exposure was the optimal exposure length at a temperature of -135°C and could be applied as the basis for future studies.

Relevance: 90.00%

Abstract:

OBJECTIVE: To evaluate patterns of physical activity (PA), the prevalence of physical inactivity and the relationships between PA and sociodemographic, clinical and biochemical parameters among Sri Lankan adults. DESIGN: Descriptive cross-sectional study. SETTING: Nationally representative population-based survey conducted in Sri Lanka. SUBJECTS: Data on PA and associated details were obtained from 5000 adults. PA was assessed using the International Physical Activity Questionnaire (short form). A binary logistic regression analysis was performed using the dichotomous variable ‘health-enhancing PA’ (0=‘active’, 1=‘inactive’). RESULTS: Sample size was 4485. Mean age was 46.1 (SD 15.1) years; 39.5% were male. The mean weekly total MET (metabolic equivalent of task) minutes of PA among the study population was 4703 (SD 4369). Males (5464 (SD 5452)) had significantly higher weekly total MET minutes than females (4205 (SD 3394); P<0.001). Rural adults (5175 (SD 4583)) were significantly more active than urban adults (2956 (SD 2847); P<0.001). Tamils had the highest mean weekly total MET minutes among ethnicities. Those with tertiary education had the lowest mean weekly total MET minutes. In all, 60.0% of adults were in the ‘highly active’ category, while only 11.0% were ‘inactive’ (males 14.6%, females 8.7%; P<0.001). Of the ‘highly active’ adults, 85.8% were residing in rural areas. Results of the binary logistic regression analysis indicated that female gender (OR=2.1), age >70 years (OR=3.8), urban living (OR=2.5), Muslim ethnicity (OR=2.7), tertiary education (OR=3.6), obesity (OR=1.8), diabetes (OR=1.6), hypertension (OR=1.2) and metabolic syndrome (OR=1.3) were all associated with significantly increased odds of being physically ‘inactive’. CONCLUSIONS: The majority of Sri Lankan adults were physically ‘highly active’. Female gender, older age, urban living, Muslim ethnicity and tertiary education were all significant predictors of physical inactivity.
Physical inactivity was associated with obesity, diabetes, hypertension and metabolic syndrome.
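The odds ratios reported for the binary logistic regression above are obtained by exponentiating the fitted coefficients. A minimal sketch of that conversion, using a hypothetical coefficient chosen to reproduce one of the quoted values:

```python
import math

def odds_ratio(beta):
    """A logistic-regression coefficient converts to an odds ratio via exp(beta).
    beta = 0 corresponds to OR = 1, i.e. no change in the odds of inactivity."""
    return math.exp(beta)

# Hypothetical coefficient: beta = ln(2.1) reproduces the OR of 2.1
# reported for female gender; the model's actual coefficients are not given.
beta_female = math.log(2.1)
```

An OR above 1 for a predictor (here, the odds of being 'inactive') means the predictor raises those odds relative to the reference group, holding the other covariates fixed.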

Relevance: 90.00%

Abstract:

Purpose Paper-based nutrition screening tools can be challenging to implement in the ambulatory oncology setting. The aim of this study was to determine the validity of the Malnutrition Screening Tool (MST) and a novel, automated nutrition screening system compared to a ‘gold standard’ full nutrition assessment using the Patient-Generated Subjective Global Assessment (PG-SGA). Methods An observational, cross-sectional study was conducted in an outpatient oncology day treatment unit (ODTU) within an Australian tertiary health service. Eligibility criteria were as follows: ≥18 years, receiving outpatient anticancer treatment and English literate. Patients self-administered the MST. A dietitian assessed nutritional status using the PG-SGA, blinded to the MST score. Automated screening system data were extracted from an electronic oncology prescribing system. This system used weight loss over the 3 to 6 weeks prior to the most recent weight record, or age-categorised body mass index (BMI), to identify nutritional risk. Sensitivity and specificity against PG-SGA-defined malnutrition were calculated using contingency tables and receiver operating characteristic (ROC) curves. Results There were a total of 300 oncology outpatients (51.7% male, 58.6 ± 13.3 years). The area under the curve (AUC) for weight loss alone was 0.69, with a cut-off value of ≥1% weight loss yielding 63% sensitivity and 76.7% specificity. The MST (score ≥2) resulted in 70.6% sensitivity and 69.5% specificity (AUC 0.77). Conclusions Both the MST and the automated method fell short of the accepted professional standard for sensitivity (≥80%) derived from the PG-SGA. Further investigation into other automated nutrition screening options and the most appropriate parameters available electronically is warranted to support targeted service provision.
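The conclusion above applies a threshold-based judgement to each tool's operating point. A sketch of that decision rule: the 80% sensitivity floor is stated in the abstract, while the 60% specificity floor is an assumed companion threshold, so both names and defaults here are illustrative.

```python
def meets_screening_standard(sensitivity, specificity,
                             min_sens=0.80, min_spec=0.60):
    """Hypothetical decision rule: a screening tool passes only if it meets
    both floors. min_sens reflects the abstract's 80% sensitivity standard;
    min_spec=0.60 is an assumed companion specificity floor."""
    return sensitivity >= min_sens and specificity >= min_spec

mst_ok = meets_screening_standard(0.706, 0.695)          # fails: sensitivity < 0.80
weight_loss_ok = meets_screening_standard(0.63, 0.767)   # fails: sensitivity < 0.80
```

Under this rule both the MST and the weight-loss-only approach fail on sensitivity alone, which mirrors the abstract's conclusion that neither reached the accepted standard.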

Relevance: 90.00%

Abstract:

OBJECTIVE To compare the physical activity (PA) patterns and the hypothesized psychosocial and environmental determinants of PA in an ethnically diverse sample of obese and non-obese middle school children. DESIGN Cross-sectional study. SUBJECTS One hundred and thirty-three non-obese and 54 obese sixth-grade children (mean age 11.4 ± 0.6 years). Obesity status was determined using the age-, race- and gender-specific 95th percentile for BMI from NHANES-1. MEASUREMENTS Objective measurements of PA were collected over a 7-day period using the CSA 7164 accelerometer: total daily counts; daily moderate (3-5.9 METs) physical activity (MPA); daily vigorous physical activity (≥6 METs; VPA); and weekly number of 5, 10 and 20 min bouts of moderate-to-vigorous physical activity (≥3 METs; MVPA). Self-report measures were collected of PA self-efficacy; social influences regarding PA; beliefs about PA outcomes; perceived PA levels of parents and peers; access to sporting and/or fitness equipment at home; involvement in community-based PA organizations; participation in community sports teams; and hours spent watching television or playing video games. RESULTS Compared to their non-obese counterparts, obese children exhibited significantly lower daily accumulations of total counts, MPA and VPA, as well as significantly fewer 5, 10 and 20 min bouts of MVPA. Obese children reported significantly lower levels of PA self-efficacy, were involved in significantly fewer community organizations promoting PA, and were significantly less likely to report their father or male guardian as physically active. CONCLUSIONS The results are consistent with the hypothesis that physical inactivity is an important contributing factor in the maintenance of childhood obesity.
Interventions to promote PA in obese children should endeavor to boost self-efficacy perceptions regarding exercise, increase awareness of, and access to, community PA outlets, and increase parental modeling of PA.