704 results for dietary restriction
Abstract:
Photographic records of dietary intake (PhDRs) are an innovative method for dietary assessment and may alleviate the burden of recording intake compared with traditional methods. While the performance of PhDRs has been evaluated, no investigation into the application of this method within dietetic practice had occurred. This study examined the attitudes of dietitians towards the use of PhDRs in the provision of nutrition care. A web-based survey on practices and beliefs with regard to technology use among Dietitians Association of Australia members was conducted in August 2011. Of the 87 dietitians who responded, 86% assessed the intakes of clients as part of individualised medical nutrition therapy, with the diet history the most common method used. The majority (91%) of dietitians surveyed believed that a PhDR would be of use in their current practice to estimate intake. Information contained in the PhDR would primarily be used to obtain a qualitative evaluation of diet (84%) or to supplement an existing assessment method (69%), as opposed to deriving an absolute measure of nutrient intake (31%). Most (87%) indicated that a PhDR would also be beneficial both in the delivery of the intervention and in evaluating and monitoring goals and outcomes, while only 46% felt that a PhDR would assist in determining the nutrition diagnosis. This survey highlights the potential for the use of PhDRs within practice. Future endeavours lie in establishing resources which support the inclusion of PhDRs within the nutrition care process.
Abstract:
Background & aims: One aim of the Australasian Nutrition Care Day Survey was to determine the nutritional status and dietary intake of acute care hospital patients. Methods: Dietitians from 56 hospitals in Australia and New Zealand completed a 24-h survey of the nutritional status and dietary intake of adult hospitalised patients. Nutritional risk was evaluated using the Malnutrition Screening Tool. Participants ‘at risk’ underwent nutritional assessment using Subjective Global Assessment. Based on the International Classification of Diseases (Australian modification), participants were also deemed malnourished if their body mass index was <18.5 kg/m2. Dietitians recorded participants’ dietary intake at each main meal and snack as 0%, 25%, 50%, 75%, or 100% of that offered. Results: 3122 patients (mean age: 64.6 ± 18 years) participated in the study. Forty-one percent of the participants were “at risk” of malnutrition. Overall malnutrition prevalence was 32%. Fifty-five percent of malnourished participants and 35% of well-nourished participants consumed ≤50% of the food offered during the 24-h audit. “Not hungry” was the most common reason for not consuming everything offered during the audit. Conclusion: Malnutrition and sub-optimal food intake are prevalent in acute care patients across hospitals in Australia and New Zealand and warrant appropriate interventions.
Abstract:
A routine activity for a sports dietitian is to estimate energy and nutrient intake from an athlete's self-reported food intake. Decisions made by the dietitian when coding a food record are a source of variability in the data. The aim of the present study was to determine the variability in estimates of the daily energy and key nutrient intakes of elite athletes when experienced coders analyzed the same food record using the same database and software package. Seven-day food records from a dietary survey of athletes in the 1996 Australian Olympic team were randomly selected to provide 13 sets of records, each set representing the self-reported food intake of an endurance, team, weight-restricted, and sprint/power athlete. Each set was coded by 3-5 members of Sports Dietitians Australia, giving a total of 52 athletes, 53 dietitians, and 1456 athlete-days of data. We estimated within- and between-athlete and dietitian variances for each dietary nutrient using mixed modeling, and we combined the variances to express variability as a coefficient of variation (typical variation as a percentage of the mean). Variability in the mean of 7-day estimates of a nutrient was 2- to 3-fold less than that of a single day. The variability contributed by the coder was less than the true athlete variability for a 1-day record but was of similar magnitude for a 7-day record. The most variable nutrients (e.g., vitamin C, vitamin A, cholesterol) had approximately 3-fold more variability than the least variable nutrients (e.g., energy, carbohydrate, magnesium). These athlete and coder variabilities need to be taken into account in the dietary assessment of athletes for counseling and research.
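The abstract does not spell out the calculation, but under the standard mixed-model assumption of independent variance components, the combined typical variation would take the form

\[ CV = \frac{\sqrt{\sigma^2_{\text{athlete}} + \sigma^2_{\text{dietitian}} + \sigma^2_{\text{day}}/d}}{\mu} \times 100\%, \]

where \(d\) is the number of days averaged. Averaging a 7-day record divides the day-to-day variance by 7, shrinking its standard deviation by \(\sqrt{7} \approx 2.6\), which is consistent with the reported 2- to 3-fold reduction in variability for 7-day means relative to single days.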
Abstract:
International research on prisoners demonstrates poor health outcomes, including chronic disease, and the overall burden to the community is high. Prisoners are predominantly male and young. In Australia, the average incarceration length is 3 years, sufficient to impact long-term health, including nutrition. Food in prisons is highly controlled, yet gaps exist in policy. In most Western countries prisons promote healthy foods, often incongruent with prisoner expectations or wants. Few studies have been conducted on dietary intakes during incarceration in relation to food policy. In this study, detailed diet histories were collected from 120 of 945 men (mean age = 32 years) in a high-secure prison. Intakes were verified via individual purchase records, mealtime observations, and audits of food preparation, purchasing and holdings. Physical measurements (including fasting bloods) were taken and medical records reviewed. Results showed that the standard food provided was consistent with current dietary guidelines, although limited in menu choice. Diet histories revealed self-funded foods contributing 1–63% of energy (mean = 30%), 0–83% of sugar (mean = 38%), 1–77% of saturated fat (mean = 31%) and 1–59% of sodium (mean = 23%). High levels of modification to the food provided were found, using minimal cooking amenities and incorporating self-funded foods and/or foods retained from previous meals. Medical records and physical measurements confirmed markers of chronic disease. This study highlights the need to establish clear guidelines on all food available in prisons if chronic disease risk reduction is a goal. It also supports evidence-based food and nutrition policy covering menu choice, food quality, quantity and safety, as well as the type of, and access to, self-funded foods.
Abstract:
Diet Induced Thermogenesis (DIT) is the energy expended consequent to meal consumption, and reflects the energy required for the processing and digestion of food consumed throughout each day. Although DIT is the total energy expended across a day in digestive processes in response to a number of meals, most studies measure thermogenesis in response to a single meal (Meal Induced Thermogenesis: MIT) as a representation of an individual’s thermogenic response to acute food ingestion. As a component of energy expenditure, DIT may have a contributing role in weight gain and weight loss. While the evidence is inconsistent, research has tended to reveal a suppressed MIT response in obese compared to lean individuals, identifying individuals who store food energy efficiently and hence have a greater tendency for weight gain. Appetite is another factor regulating body weight through its influence on energy intake. Preliminary research has shown a potential link between MIT and postprandial appetite, as both are responses to food ingestion and both depend upon the macronutrient content of the food. There is a growing interest in understanding how both MIT and appetite are modified with changes in diet, activity levels and body size. However, the findings from MIT research have been highly inconsistent, potentially due to the vastly divergent protocols used for its measurement. Therefore, the main aim of this thesis was, firstly, to address some of the methodological issues associated with measuring MIT. Additionally, this thesis aimed to measure postprandial appetite simultaneously with MIT to test for any relationships between these meal-induced variables, and to assess changes that occur in MIT and postprandial appetite during periods of energy restriction (ER) and following weight loss. Two separate studies were conducted to achieve these aims. Given the increasing prevalence of obesity, it is important to develop accurate methodologies for measuring the components potentially contributing to its development and to understand the variability within these variables. Therefore, the aim of Study One was to establish a protocol for measuring the thermogenic response to a single test meal (MIT), as a representation of DIT across a day. This was done by determining the reproducibility of MIT with a continuous measurement protocol and determining the effect of measurement duration. The benefit of a fixed resting metabolic rate (RMR; a single measure used to calculate every subsequent measure of MIT) was also compared with separate baseline RMRs (measured immediately prior to each MIT test meal) to determine which method gave greater reproducibility. Subsidiary aims were to measure postprandial appetite simultaneously with MIT, to determine its reproducibility between days and to assess potential relationships between these two variables. Ten healthy individuals (5 males, 5 females, age = 30.2 ± 7.6 years, BMI = 22.3 ± 1.9 kg/m2, %Fat Mass = 27.6 ± 5.9%) undertook three testing sessions within a 1-4 week period. During the first visit, participants had their body composition measured using DXA for descriptive purposes, then had an initial 30-minute measure of RMR to familiarise them with the testing and to be used as a fixed baseline for calculating MIT. During the second and third testing sessions, MIT was measured.
Measures of RMR and MIT were undertaken using a metabolic cart with a ventilated hood to measure energy expenditure via indirect calorimetry, with participants in a semi-reclined position. The procedure on each MIT test day was: 1) a baseline RMR measured for 30 minutes, 2) a 15-minute break in the measure to consume a standard 576 kcal breakfast (54.3% CHO, 14.3% PRO, 31.4% FAT), comprising muesli, milk, toast, butter, jam and juice, and 3) six hours of measuring MIT, with two ten-minute breaks at 3 and 4.5 hours for participants to visit the bathroom. On the MIT test days, pre- and post-breakfast and then at 45-minute intervals, participants rated their subjective appetite, alertness and comfort on visual analogue scales (VAS). Prior to each test, participants were required to have fasted for 12 hours and to have undertaken no high-intensity physical activity for the previous 48 hours. Despite no significant group changes in the MIT response between days, individual variability was high, with an average between-day CV of 33%, which was not significantly improved (31%) by the use of a fixed RMR. The 95% limits of agreement, which ranged from 9.9% of energy intake (%EI) to -10.7%EI with the baseline RMRs and from 9.6%EI to -12.4%EI with the fixed RMR, indicated very large changes relative to the size of the average MIT response (MIT 1: 8.4%EI, 13.3%EI; MIT 2: 8.8%EI, 14.7%EI; baseline and fixed RMRs respectively). After just three hours, the between-day CV with the baseline RMR was 26%, which may indicate enhanced MIT reproducibility with shorter measurement durations. On average, 76, 89, and 96% of the six-hour MIT response was completed within three, four and five hours, respectively. Strong correlations were found between MIT at each of these time points and the total six-hour MIT (range for correlations r = 0.990 to 0.998; P < 0.01). The proportion of the six-hour MIT completed at 3, 4 and 5 hours was reproducible (between-day CVs ≤ 8.5%), indicating that shorter durations can be used on repeated occasions with a similar percentage of the total response completed each time. There was a lack of strong evidence of any relationship between the magnitude of the MIT response and subjective postprandial appetite. Given that a six-hour protocol places a considerable burden on participants, these results suggest that a post-meal measurement period of only three hours is sufficient to produce valid information on the metabolic response to a meal. However, while there was no mean change in MIT between test days, individual variability was large. Further research is required to better understand which factors best explain the between-day variability in this physiological measure. With such a high prevalence of obesity, dieting has become a necessity to reduce body weight. However, during periods of ER, metabolic and appetite adaptations can occur which may impede weight loss. Understanding how metabolic and appetite factors change during ER and weight loss is important for designing optimal weight loss protocols. The purpose of Study Two was to measure the changes in the MIT response and subjective postprandial appetite during either continuous (CONT) or intermittent (INT) ER and following post-diet energy balance (post-diet EB). Thirty-six obese male participants were randomly assigned to either the CONT (age = 38.6 ± 7.0 years, weight = 109.8 ± 9.2 kg, % fat mass = 38.2 ± 5.2%) or INT diet group (age = 39.1 ± 9.1 years, weight = 107.1 ± 12.5 kg, % fat mass = 39.6 ± 6.8%).
The study was divided into three phases: a four-week baseline (BL) phase, where participants were provided with a diet to maintain body weight; an ER phase lasting either 16 (CONT) or 30 (INT) weeks, where participants were provided with a diet supplying 67% of their energy balance requirements to induce weight loss; and an eight-week post-diet EB phase, providing a diet to maintain body weight post weight loss. The INT ER phase was delivered as eight two-week blocks of ER interspersed with two-week blocks designed to achieve weight maintenance. Energy requirements for each phase were predicted based on measured RMR, and adjusted throughout the study to account for changes in RMR. All participants completed MIT and appetite tests during BL and the ER phase. Nine CONT and 15 INT participants completed the post-diet EB MIT tests, and 15 CONT and 14 INT participants completed the post-diet EB appetite tests. The MIT test day protocol was as follows: 1) a baseline RMR measured for 30 minutes, 2) a 15-minute break in the measure to consume a standard breakfast meal (874 kcal, 53.3% CHO, 14.5% PRO, 32.2% FAT), and 3) three hours of measuring MIT. MIT was calculated as the energy expenditure above the pre-meal RMR. Appetite tests were undertaken on a separate day using the same 576 kcal breakfast used in Study One. VAS were used to assess appetite pre- and post-breakfast, at one hour post-breakfast, and then a further three times at 45-minute intervals. Appetite ratings were calculated for hunger and fullness as both the intra-meal change in appetite and the area under the curve (AUC). The three-hour MIT responses at BL, ER and post-diet EB respectively were 5.4 ± 1.4%EI, 5.1 ± 1.3%EI and 5.0 ± 0.8%EI for the CONT group and 4.4 ± 1.0%EI, 4.7 ± 1.0%EI and 4.8 ± 0.8%EI for the INT group. Compared to BL, neither group had significant changes in their MIT response during ER or post-diet EB. There were no significant time-by-group interactions (p = 0.17), indicating a similar response to ER and post-diet EB in both groups. Contrary to what was hypothesised, there was a significant increase in postprandial AUC fullness in response to ER in both groups (p < 0.05). However, there were no significant changes in any of the other postprandial hunger or fullness variables. Despite no changes in MIT in either the CONT or INT group in response to ER or post-diet EB, and only a minor increase in postprandial AUC fullness, the individual changes in MIT and postprandial appetite in response to ER were large. However, those with the greatest MIT changes did not have the greatest changes in postprandial appetite. This study shows that postprandial appetite and MIT are unlikely to be altered during ER and are unlikely to hinder weight loss. Additionally, there were no changes in MIT in response to weight loss, indicating that body weight did not influence the magnitude of the MIT response. There were large individual changes in both variables; however, further research is required to determine whether these changes were real compensatory responses to ER or simply between-day variation. Overall, the results of this thesis add to the current literature by showing the large variability of continuous MIT measurements, which makes it difficult to compare MIT between groups and in response to diet interventions. This thesis provides evidence that shorter measures may provide equally valid information about the total MIT response and can therefore be utilised in future research to reduce the burden of long measurement durations.
This thesis indicates that MIT and postprandial subjective appetite are most likely independent of each other. This thesis also shows that, on average, energy restriction was not associated with compensatory changes in MIT and postprandial appetite that would have impeded weight loss. However, the large inter-individual variability supports the need to examine individual responses in more detail.
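The thesis defines MIT operationally as the energy expenditure above the pre-meal RMR, expressed as a percentage of the test-meal energy (%EI). A plausible formalisation, with the integral notation introduced here purely for illustration, is

\[ \mathrm{MIT}\,(\%\mathrm{EI}) = \frac{\int_{0}^{T} \big( EE(t) - RMR \big)\, dt}{EI_{\text{meal}}} \times 100, \]

where \(EE(t)\) is the postprandial energy expenditure from indirect calorimetry, \(T\) is the measurement window (6 h in Study One, 3 h in Study Two) and \(EI_{\text{meal}}\) is the energy of the test meal (576 kcal or 874 kcal). The between-day CV quoted for a pair of repeated measures is then the standard deviation of the pair divided by their mean, expressed as a percentage.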
Abstract:
Three native freshwater crayfish (Cherax) species are farmed in Australia, namely redclaw (Cherax quadricarinatus), marron (C. tenuimanus) and yabby (C. destructor). Lack of appropriate data on the specific nutrient requirements of each species, however, has constrained the development of species-specific formulated diets, and the current use of over-formulated feeds or expensive marine shrimp feeds limits profitability. A number of studies have investigated nutritional requirements in redclaw, focusing on replacing expensive fish meal in formulated feeds with non-protein, less expensive substitutes, including plant-based ingredients. Confirmation that freshwater crayfish possess endogenous cellulase genes suggests a potential ability to utilize complex carbohydrates such as cellulose as nutrient sources in their diet. To date, studies have been limited to C. quadricarinatus and C. destructor, and no studies have compared the relative ability of each species to utilize soluble cellulose in their diets. Individual feeding trials of late juveniles of each species were conducted separately in an automated recirculating culture system over 12-week cycles. Animals were fed either a test diet (TD) that contained 20% soluble cellulose or a reference diet (RD) substituted with the same amount of corn starch. Water temperature, conductivity and pH were maintained at constant, optimum levels for each species. Animals were fed at 3% of their body weight twice daily, and wet body weight was recorded bi-weekly. At the end of the experiment, all animals were harvested and measured, and midgut gland extracts were assayed for alpha-amylase, total protease and cellulase activity levels. After the trial period, redclaw fed the RD showed a significantly higher (p<0.05) specific growth rate (SGR) compared with animals fed the TD, while the SGRs of marron and yabby fed the two diets were not significantly different (p>0.05). Cellulase expression levels in redclaw were not significantly different between diets. Marron and yabby showed significantly higher cellulase activity when fed the RD. Amylase and protease activity in all three species were significantly higher in animals fed the RD. These results indicate that all three species can utilize starch better than dietary soluble cellulose, and that the inclusion of 20% soluble cellulose in diets does not appear to have any significant negative effect on growth rate; survival, however, was reduced in C. quadricarinatus but not in C. tenuimanus or C. destructor.
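Neither this abstract nor the next states how growth was quantified beyond naming SGR; the conventional aquaculture definition, given here for reference, is

\[ SGR\ (\%/\text{day}) = \frac{\ln W_f - \ln W_i}{t} \times 100, \]

where \(W_f\) and \(W_i\) are the final and initial wet body weights and \(t\) is the trial duration in days (84 days for a 12-week cycle).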
Abstract:
The current study evaluated the effect of soluble dietary cellulose on growth, survival and digestive enzyme activity in three endemic Australian freshwater crayfish species (redclaw: Cherax quadricarinatus; marron: C. tenuimanus; yabby: C. destructor). Separate individual feeding trials were conducted for late-stage juveniles from each species in an automated recirculating freshwater culture system. Animals were fed either a test diet (TD) that contained 20% soluble cellulose or a reference diet (RD) substituted with the same amount of corn starch, over a 12-week period. Redclaw fed the RD showed significantly higher (p<0.05) specific growth rates (SGR) compared with animals fed the TD, while the SGRs of marron and yabby fed the two diets were not significantly different. Expressed cellulase activity levels in redclaw were not significantly different between diets. Marron and yabby showed significantly higher cellulase activity when fed the RD (p<0.05). Amylase and protease activity in all three species were significantly higher in the animals fed the RD (p<0.05). These results indicate that test animals of all three species appear to utilize starch more efficiently than soluble dietary cellulose. The inclusion of 20% soluble cellulose in diets did not appear, however, to have a significant negative effect on growth rates.
Abstract:
BACKGROUND/OBJECTIVE: To investigate the extent of baseline psychosocial characterisation of subjects in published dietary randomised controlled trials (RCTs) for weight loss. SUBJECTS/METHODS: Adequately sized (n ≥ 10) RCTs comprising ≥1 diet-alone arm for weight loss were included in this systematic review. More specifically, trials included overweight (body mass index ≥25 kg/m2) adults, were of duration ≥8 weeks and had body weight as the primary outcome. Exclusion criteria included specific psychological intervention (for example, Cognitive Behaviour Therapy (CBT)), use of web-based tools, use of supplements, liquid diets, replacement meals and very-low-calorie diets. Physical activity intervention was restricted to general exercise only (not supervised or prescribed, for example, to a VO2 maximum level). RESULTS: Of 176 weight-loss RCTs published during 2008–2010, 15 met the selection criteria and were assessed for reported psychological characterisation of subjects. All studies reported standard characterisation of clinical and biochemical characteristics of subjects. Eleven studies reported no psychological attributes of subjects (although three of these did exclude those taking psychoactive medication). Three studies collected data on particular aspects of psychology related to specific research objectives (figure scale rating, satiety and quality of life). Only one study provided a comprehensive background on the psychological attributes of subjects. CONCLUSION: Better characterisation in behaviour-change interventions will reduce potential confounding and enhance the generalisability of such studies.
Abstract:
Essential hypertension is a highly heritable disorder in which genetic influences predominate over environmental factors. The molecular genetic profiles which predispose to essential hypertension are not known. In rats with genetic hypertension, there is some recent evidence pointing to linkage of renin gene alleles with blood pressure. The genes for renin and antithrombin III belong to a conserved synteny group which, in humans, spans the q21.3-32.3 region of chromosome 1 and, in rats, is linkage group X on chromosome 13. The present study examined the association of particular human renin gene (REN) and antithrombin III gene (AT3) polymorphisms with essential hypertension by comparing the frequency of specific alleles for each of these genes in 50 hypertensive offspring of hypertensive parents and 91 normotensive offspring of normotensive parents. In addition, linkage relationships were examined in hypertensive pedigrees with multiple affected individuals. Alleles of a REN HindIII restriction fragment length polymorphism (RFLP) were detected using a genomic clone, λHR5, to probe Southern blots of HindIII-cut leucocyte DNA, and those for an AT3 PstI RFLP were detected with the phATIII 113 complementary DNA probe. The frequencies of each REN allele in the hypertensive group were 0.76 and 0.24, compared with 0.74 and 0.26 in the normotensive group. For AT3, hypertensive allele frequencies were 0.49 and 0.51, compared with normotensive values of 0.54 and 0.46. These differences were not significant by χ2 analysis (P > 0.2). Linkage analysis of a family (data from 16 family members, 10 of whom were hypertensive), informative for both markers, without an age-of-onset correction, and assuming dominant inheritance of hypertension, complete penetrance and a disease frequency of 20%, did not indicate linkage of REN with hypertension, but gave a positive, although not significant, logarithm-of-the-odds (LOD) score of 0.784 at a recombination fraction of 0 for AT3 linkage to hypertension. In conclusion, the present study could find no evidence for an association of a REN HindIII RFLP with essential hypertension, or for linkage of the locus defined by this RFLP in a family segregating for hypertension. In the case of an AT3 PstI RFLP, although association analysis was negative, linkage analysis suggested possible involvement (odds of 6:1 in favour) of a gene located near the 1q23 locus with hypertension in one informative family.
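As a reading aid (this relationship is standard in linkage analysis, not specific to this paper), a LOD score is the base-10 logarithm of the likelihood ratio for linkage at recombination fraction \(\theta\) against no linkage:

\[ \mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 0.5)}. \]

The reported score of 0.784 at \(\theta = 0\) therefore corresponds to odds of \(10^{0.784} \approx 6.1{:}1\) in favour of linkage, matching the 6:1 odds quoted in the conclusion and well short of the conventional LOD ≥ 3 threshold for significant linkage.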
Abstract:
Objectives: To examine the effects on monotonous driving of normal sleep versus one night of sleep restriction in continuous positive airway pressure (CPAP)-treated obstructive sleep apnoea (OSA) patients compared with age-matched healthy controls. Methods: Nineteen CPAP-treated, compliant male OSA patients (OPs), aged 50–75 years, and 20 healthy age-matched controls underwent both a normal night’s sleep and sleep restriction to 5 h (OPs remained on CPAP) in a counterbalanced design. All participants completed a 2 h afternoon monotonous drive in a realistic car simulator. Driving was monitored for sleepiness-related minor and major lane deviations, with ‘safe’ driving time being the total time driven prior to the first major lane deviation. EEGs were recorded continuously, and subjective sleepiness ratings were taken at regular intervals throughout the drive. Results: After a normal night’s sleep, OPs and controls did not differ in terms of driving performance or in their ability to assess the levels of their own sleepiness, with both groups driving ‘safely’ for approximately 90 min. However, after sleep restriction, OPs had a significantly shorter (65 min) safe driving time and had to apply more compensatory effort to maintain their alertness compared with controls. They also underestimated their enhanced sleepiness. Nevertheless, apart from this caveat, there were generally close associations between subjective sleepiness, likelihood of a major lane deviation and EEG changes indicative of sleepiness. Conclusions: With a normal night’s sleep, effectively treated older men with OSA drive as safely as healthy men of the same age. However, after restricted sleep, their driving impairment is worse than that of controls. This suggests that, although successful CPAP treatment can alleviate the potential detrimental effects of OSA on monotonous driving following normal sleep, these patients remain more vulnerable to sleep restriction.
Abstract:
Background and aims: The Australasian Nutrition Care Day Survey (ANCDS) reported that two in five patients consume ≤50% of the offered food in Australian and New Zealand hospitals. After controlling for confounders (nutritional status, age, disease type and severity), the ANCDS also established an independent association between poor food intake and increased in-hospital mortality. This study aimed to evaluate whether medical nutrition therapy (MNT) could improve dietary intake in hospital patients eating poorly. Methods: An exploratory pilot study was conducted in the respiratory, neurology and orthopaedic wards of an Australian hospital. At baseline, percentage food intake (0%, 25%, 50%, 75%, 100%) was evaluated for each main meal and snack over a 24-hour period in patients hospitalised for ≥2 days and not under dietetic review. Patients consuming ≤50% of offered meals due to nutrition-impact symptoms were referred to ward dietitians for MNT. Food intake was re-evaluated on the seventh day following recruitment (post-MNT). Results: 184 patients were observed over four weeks; 32 patients were referred for MNT. Although baseline and post-MNT data for 20 participants (68 ± 17 years, 65% female) indicated a significant increase in median energy and protein intake post-MNT (3600 kJ/day, 40 g/day) versus baseline (2250 kJ/day, 25 g/day) (p<0.05), the increased intake met only 50% of dietary requirements. Persistent nutrition-impact symptoms affected intake. Conclusion: In this pilot study, whilst dietary intake improved, it remained inadequate to meet participants’ estimated requirements owing to ongoing nutrition-impact symptoms. Appropriate medical management and early enteral feeding could be a possible solution for such patients.
Abstract:
Background & aims: Depression has a complex association with cardiometabolic risk, both directly as an independent factor and indirectly through mediating effects on other risk factors such as BMI, diet, physical activity, and smoking. Since changes to many cardiometabolic risk factors involve behaviour change, the rise in depression prevalence as a major global health issue may present further challenges to the long-term behaviour change needed to reduce such risk. This study investigated associations between depression scores and participation in a community-based weight management intervention trial. Methods: A group of 64 overweight (BMI > 27), otherwise healthy adults were recruited and randomised to follow either their usual diet or an isocaloric diet in which saturated fat was replaced with monounsaturated fat (MUFA), to a target of 50% of total fat, by adding macadamia nuts to the diet. Subjects were assessed for depressive symptoms at baseline and at ten weeks using the Beck Depression Inventory (BDI-II). Both control and intervention groups received advice on the National Guidelines for Physical Activity and adhered to the same protocol for food diary completion and trial consultations. Anthropometric and clinical measurements (cholesterol, inflammatory mediators) were also taken at baseline and 10 weeks. Results: During the recruitment phase, pre-existing diagnosed major depression was one of a range of reasons for initial exclusion of volunteers from the trial. Amongst enrolled participants, there was a significant correlation (R = −0.38, p < 0.05) between BDI-II scores at baseline and duration of participation in the trial. Subjects with a baseline BDI-II score ≥10 (moderate to severe depressive symptoms) were more likely to drop out of the trial before week 10 (p < 0.001). BDI-II scores decreased in the intervention (MUFA) diet group but increased in the control group over the 10-week period. Univariate analysis of variance confirmed these observations (adjusted R2 = 0.257, p = 0.01). Body weight remained static over the 10-week period in the intervention group, corresponding to a relative increase in the control group (adjusted R2 = 0.097, p = 0.064). Conclusions: Depression symptoms have the potential to affect enrolment in and adherence to diet-based risk reduction interventions, and may consequently influence the generalisability of such trials. Depression scores may therefore be useful for characterising, screening and allocating subjects to appropriate treatment pathways.
Abstract:
Background & Aims: Access to sufficient amounts of safe and culturally-acceptable foods is a fundamental human right. Food security exists when all people, at all times, have physical, social, and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Food insecurity therefore occurs when the availability of, or access to, sufficient amounts of nutritionally-adequate, culturally-appropriate and safe foods, or the ability to acquire such foods in socially-acceptable ways, is limited. Food insecurity may result in significant adverse effects for the individual, and these outcomes may vary between adults and children. Among adults, food insecurity may be associated with overweight or obesity, poorer self-rated general health, depression, increased health-care utilisation and dietary intakes less consistent with national recommendations. Among children, food insecurity may result in poorer self- or parent-reported general health, behavioural problems, lower levels of academic achievement and poor social outcomes. The majority of research investigating the potential correlates of food insecurity has been undertaken in the United States (US), where regular national screening for food insecurity is undertaken using a comprehensive multi-item measure. In Australia, screening for food insecurity takes place on a three-yearly basis via a crude, single-item measure included in the National Health Survey (NHS). This measure has been shown to underestimate the prevalence of food insecurity by 5%. From 1995 to 2004, the prevalence of food insecurity among the Australian population remained stable at 5%. Due to the perceived low prevalence of this issue, screening for food insecurity was not undertaken in the most recent NHS. Furthermore, there are few Australian studies investigating the potential determinants of food insecurity and none investigating potential outcomes among adults and children. This study aimed to examine these issues by a) investigating the prevalence of food insecurity among households residing in disadvantaged urban areas, comparing prevalence rates estimated by the more comprehensive 18-item and 6-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM) with those estimated by the current single-item measure used for surveillance in Australia, and b) investigating the potential determinants and outcomes of food insecurity. Methods: A comprehensive literature review was undertaken to investigate the potential determinants and consequences of food insecurity in developed countries. This was followed by a cross-sectional study in which 1000 households from the most disadvantaged 5% of Brisbane areas were sampled and data collected via a mail-based survey (final response rate = 53%, n = 505). Data were collected on food security status, sociodemographic characteristics (household income, education, age, gender, employment status, housing tenure and living arrangements), fruit and vegetable intakes, meat and take-away consumption, presence of depressive symptoms, presence of chronic disease and body mass index (BMI) among adults. Among children, data pertaining to BMI, parent-reported general health, days away from school and activities, and behavioural problems were collected.
Rasch analysis was used to investigate the psychometric properties of the 18-, 10- and 6-item adaptations of the USDA-FSSM, and McNemar's test was used to investigate the difference in the prevalence of food insecurity as measured by these three adaptations compared with the current single-item measure used in Australia. Chi-square and logistic regression were used to investigate the differences in dietary and health outcomes among adults and health and behavioural outcomes among children. Results were adjusted for equivalised household income and, where necessary, for indigenous status, education and family type. Results: Overall, 25% of households in these urbanised-disadvantaged areas reported experiencing food insecurity; this increased to 34% when only households with children were analysed. The current reliance on a single-item measure to screen for food insecurity may underestimate the true burden among the Australian population, as this measure was shown to significantly underestimate the prevalence of food insecurity by five percentage points. Internationally, major potential determinants of food insecurity included poverty and indicators of poverty, such as low income, unemployment and lower levels of education. Ethnicity, age, transportation, and cooking and financial skills were also found to be potential determinants of food insecurity. Among Australian adults in disadvantaged urban areas, food insecurity was associated with a three-fold increase in the odds of poorer self-rated general health and a two-to-five-fold increase in the risk of depression. Furthermore, adults from food-insecure households were two to three times more likely to have seen a general practitioner and/or been admitted to hospital within the previous six months, compared to their food-secure counterparts. Weight status and intakes of fruits, vegetables and meat were not associated with food insecurity. Among Australian households with children, those in the lowest income tertile were over 16 times more likely to experience food insecurity than those in the highest tertile. After adjustment for equivalised household income, children from food-insecure households were three times more likely to have missed days of school or other activities. Furthermore, children from food-insecure households displayed a two-fold increase in atypical emotions and behavioural difficulties. Conclusions: Food insecurity is an important public health issue and may contribute to the burden on the health care system through its associations with depression and increased health care utilisation among adults, and behavioural and emotional problems among children. Current efforts to monitor food insecurity in Australia do not occur frequently and use a tool that may underestimate the prevalence of food insecurity. Efforts should be made to improve the regularity of screening for food insecurity via the use of a more accurate screening measure. Most current strategies that aim to alleviate food insecurity do not sufficiently address the issue of insufficient financial resources for acquiring food, which is an important determinant of food insecurity. Programs to address this issue should be developed in collaboration with groups at higher risk of food insecurity and should incorporate strategies to address low income as a barrier to food acquisition.
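McNemar's test, as named in the methods, compares two paired binary classifications through their discordant pairs. A minimal sketch in Python follows; the statsmodels call is real, but the 2×2 counts are illustrative placeholders, not this study's data.

```python
# McNemar's test on paired food-security classifications: each household is
# classified by both the 18-item USDA-FSSM and the single-item NHS measure.
# The counts below are ILLUSTRATIVE placeholders, not taken from this study.
from statsmodels.stats.contingency_tables import mcnemar

#        single-item: insecure | single-item: secure
table = [[90, 35],    # 18-item FSSM: insecure
         [10, 370]]   # 18-item FSSM: secure

# Only the discordant cells (35 vs 10) drive the test: a significant result
# means the two measures disagree systematically in one direction.
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
```

A one-sided excess in the cell where the multi-item measure flags insecurity but the single-item measure does not is exactly the pattern behind the reported five-percentage-point underestimate.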
Abstract:
Traditionally, infectious diseases and under-nutrition have been considered the major health problems in Sri Lanka, with little attention paid to obesity and associated non-communicable diseases (NCDs). However, the recent Sri Lanka Diabetes and Cardiovascular Study (SLDCS) reported epidemic levels of obesity, diabetes and metabolic syndrome. Moreover, obesity-associated NCDs are the leading cause of death in Sri Lanka, and there is an exponential increase in hospitalization due to NCDs, adversely affecting the development of the country. Despite Sri Lanka having a very high prevalence of NCDs and associated mortality, little is known about the causative factors for this burden. It is widely believed that the global NCD epidemic is associated with recent lifestyle changes, especially dietary factors. In the absence of sufficient data on dietary habits in Sri Lanka, successful interventions to manage these serious health issues are not possible. In view of the current situation, a dietary survey was undertaken to assess intakes of energy, macro-nutrients and selected other nutrients with respect to socio-demographic characteristics and the nutritional status of Sri Lankan adults, focusing especially on obesity. Another aim of this study was to develop and validate a culturally specific food frequency questionnaire (FFQ) to assess dietary risk factors for NCDs in Sri Lankan adults. Data were collected from a subset of the national SLDCS using a multi-stage, stratified, random sampling procedure (n=500). However, data collection in the SLDCS was affected by the prevailing civil war, which resulted in no data being collected from the Northern and Eastern provinces. To obtain a nationally representative sample, additional subjects (n=100) were later recruited from these two provinces using similar selection criteria. Ethical approval for this study was obtained from the Ethical Review Committee, Faculty of Medicine, University of Colombo, Sri Lanka, and informed consent was obtained from the subjects before data were collected. Dietary data were obtained using the 24-h Dietary Recall (24HDR) method, in which subjects were asked to recall all foods and beverages consumed over the previous 24-hour period. Respondents were probed for the types of foods and food preparation methods. For the FFQ validation study, a 7-day weighed diet record (7-d WDR) was used as the reference method. All foods recorded in the 24HDR were converted into grams, and intakes of energy and nutrients were then analysed using NutriSurvey 2007 (EBISpro, Germany), modified for Sri Lankan food recipes. Socio-demographic details and body weight perception were collected by interviewer-administered questionnaire. BMI was calculated, and overweight (BMI ≥23 kg.m-2), obesity (BMI ≥25 kg.m-2) and abdominal obesity (men: WC ≥90 cm; women: WC ≥80 cm) were categorized according to Asia-Pacific anthropometric cut-offs. SPSS v16 for Windows and Minitab v10 were used for statistical analysis. From a total of 600 eligible subjects, 491 (81.8%) participated, of whom 34.5% (n=169) were males. Subjects were well distributed across different socio-economic parameters. A total of 312 different food items were recorded, and nutritionists grouped similar food items, resulting in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected.
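The step-wise selection of food items described above lends itself to a short sketch. The following greedy forward selection by incremental R² is an assumption about the general approach, not the authors' exact procedure; the data layout and names are invented for illustration.

```python
# Hedged sketch: greedy forward step-wise selection of food items until the
# chosen items explain 90% of the variance in total energy intake.
# X: (n_subjects x n_foods) matrix of intakes; y: total energy intake (kcal).
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_select(X, y, target_r2=0.90):
    remaining = list(range(X.shape[1]))
    chosen, r2 = [], 0.0
    while remaining and r2 < target_r2:
        # At each step, add the single item that raises R^2 the most.
        best_r2, best_j = max(
            (LinearRegression().fit(X[:, chosen + [j]], y)
             .score(X[:, chosen + [j]], y), j)
            for j in remaining
        )
        if best_r2 <= r2:          # no item improves the fit further
            break
        chosen.append(best_j)
        remaining.remove(best_j)
        r2 = best_r2
    return chosen, r2
```

Repeating the selection against each target nutrient (carbohydrate, protein, fat, fibre) and taking the union of the chosen items would yield a candidate list like the 93 foods reported above.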
Seventy-seven subjects (response rate = 65%) completed the FFQ and the 7-d WDR. Estimated mean (SD) energy intakes from the FFQ (1794 ± 398 kcal) and the 7-d WDR (1698 ± 333 kcal) were significantly different (P<0.001), owing to a significant overestimation of carbohydrate (~10 g/d, P<0.001) and, to some extent, fat (~5 g/d, NS). Significant positive correlations were found between the FFQ and the 7-d WDR for energy (r = 0.39), carbohydrate (r = 0.47), protein (r = 0.26), fat (r = 0.17) and dietary fibre (r = 0.32). Bland-Altman plots indicated fairly good agreement between the methods, with no relationship between bias and average intake for any nutrient examined. The findings from the nutrition survey showed that, on average, Sri Lankan adults consumed over 14 portions of starch/d; moreover, males consumed 5 more portions of cereal than females. Sri Lankan adults consumed on average 3.56 portions of added sugars/d. Moreover, mean daily intakes of fruit (0.43 portions) and vegetables (1.73 portions) were well below the minimum dietary recommendations (fruits 2 portions/d; vegetables 3 portions/d); total fruit and vegetable intake was 2.16 portions/d. Daily consumption of meat or alternatives was 1.75 portions, and the sum of meat and pulses was 2.78 portions/d. Starchy foods were consumed by all participants, and over 88% met the minimum daily recommendations. Importantly, nearly 70% of adults exceeded the maximum daily recommendation for starch (11 portions/d), and a considerable proportion, particularly men, consumed larger numbers of starch servings daily; more than 12% of men consumed over 25 starch servings/d. In contrast to their starch consumption, participants reported very low intakes of other food groups. Only 11.6%, 2.1% and 3.5% of adults consumed the minimum daily recommended servings of vegetables, fruits, and fruits and vegetables combined, respectively. Six out of ten adult Sri Lankans sampled did not consume any fruit. Milk and dairy consumption was extremely low; over a third of the population did not consume any dairy products, and less than 1% of adults consumed 2 portions of dairy/d. A quarter of Sri Lankans did not report consumption of meat and pulses. Regarding protein consumption, 36.2% attained the minimum Sri Lankan recommendation for protein, and significantly more men than women achieved the recommendation of ≥3 servings of meat or alternatives daily (men 42.6%, women 32.8%; P<0.05). Over 70% of energy was derived from carbohydrates (male: 72.8 ± 6.4%; female: 73.9 ± 6.7%), followed by fat (male: 19.9 ± 6.1%; female: 18.5 ± 5.7%) and protein (male: 10.6 ± 2.1%; female: 10.9 ± 5.6%). The average intake of dietary fibre was 21.3 g/day for males and 16.3 g/day for females. Nutritional intake differed significantly by ethnicity, area of residence, education level and BMI category. Similarly, dietary diversity was significantly associated with several socio-economic parameters among Sri Lankan adults. Adults with BMI ≥25 kg.m-2 and abdominally obese adults had the highest diet diversity values. Age-adjusted prevalences (95% confidence interval) of overweight, obesity and abdominal obesity among Sri Lankan adults were 17.1% (13.8-20.7), 28.8% (24.8-33.1) and 30.8% (26.8-35.2), respectively. Men, compared with women, were less often overweight (14.2% (9.4-20.5) versus 18.5% (14.4-23.3), P = 0.03), less often obese (21.0% (14.9-27.7) versus 32.7% (27.6-38.2), P < 0.05) and less often abdominally obese (11.9% (7.4-17.8) versus 40.6% (35.1-46.2), P < 0.05).
Although the prevalence of obesity has reached epidemic levels, body weight misperception was common among Sri Lankan adults. Two-thirds of overweight males and 44.7% of overweight females considered themselves to be "about the right weight". Over one-third of both male and female obese subjects perceived themselves to be "about the right weight" or "underweight". Nearly 32% of centrally obese men and women perceived their waist circumference to be about right. Of those who perceived themselves as overweight or very overweight (n = 154), only 63.6% had tried to lose weight (n = 98), and only a quarter had sought advice from professionals (n = 39). A number of important conclusions can be drawn from this research project. Firstly, the newly developed FFQ is an acceptable tool for assessing the nutrient intake of Sri Lankans and will assist in the proper categorization of individuals by dietary exposure. Secondly, a substantial proportion of the Sri Lankan population does not consume a varied and balanced diet, which suggests a close association between the nutrition-related NCDs in the country and unhealthy eating habits. Moreover, dietary diversity is positively associated with several socio-demographic characteristics and with obesity among Sri Lankan adults. Lastly, although obesity is a major health issue among Sri Lankan adults, body weight misperception was common among underweight, healthy-weight, overweight and obese adults in Sri Lanka: over two-thirds of overweight and one-third of obese Sri Lankan adults believed that they were in the "right weight" or "underweight" categories.
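The Bland-Altman agreement analysis reported for the FFQ validation can be sketched in a few lines. The abstract's summary statistics (means and SDs) are used below only to simulate placeholder data; nothing here reproduces the study's actual records.

```python
# Hedged sketch of a Bland-Altman analysis of FFQ vs 7-d weighed diet record.
# Data are SIMULATED from the abstract's summary statistics, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
wdr = rng.normal(1698, 333, size=77)        # reference method (kcal/day)
ffq = wdr + rng.normal(96, 250, size=77)    # FFQ overestimates on average

diff = ffq - wdr                            # per-subject bias
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

r, _ = pearsonr(ffq, wdr)                   # validity correlation
trend_r, trend_p = pearsonr((ffq + wdr) / 2, diff)  # bias vs average intake

print(f"bias = {bias:.0f} kcal/day, 95% LoA = {loa[0]:.0f} to {loa[1]:.0f}")
print(f"FFQ vs WDR r = {r:.2f}; bias-vs-mean r = {trend_r:.2f} (p = {trend_p:.2f})")
```

The reported 'no relationship between bias and average intake' corresponds, in this framework, to a non-significant bias-vs-mean correlation.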
Abstract:
Objective: To evaluate responses to self-administered brief questions regarding consumption of vegetables and fruit by comparison with blood levels of serum carotenoids and red-cell folate. Design: A cross-sectional study in which participants reported their usual intake of fruit and vegetables in servings per day, and serum levels of five carotenoids (α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and lycopene) and red-cell folate were measured. Serum carotenoid levels were determined by high-performance liquid chromatography, and red-cell folate by an automated immunoassay system. Settings and subjects: Between October and December 2000, a sample of 1598 adults aged 25 years and over, from six randomly selected urban centres in Queensland, Australia, was examined as part of a national study conducted to determine the prevalence of diabetes and associated cardiovascular risk factors. Results: Statistically significant (P<0.01) associations with vegetable and fruit intake (categorised into groups: ≤1 serving, 2–3 servings and ≥4 servings per day) were observed for α-carotene, β-carotene, β-cryptoxanthin, lutein/zeaxanthin and red-cell folate. The mean levels of these carotenoids and of red-cell folate increased with increasing frequency of reported servings of vegetables and fruit, both before and after adjusting for potential confounding factors. A significant association with lycopene was observed only for vegetable intake before adjusting for confounders. Conclusions: These data indicate that brief questions may be a simple and valuable tool for monitoring vegetable and fruit intake in this population.