17 results for Food and Nutrition Information and Educational Materials Center (U.S.)
in DigitalCommons@The Texas Medical Center
Abstract:
The relative influence of race, income, education, and Food Stamp Program participation/nonparticipation on the food and nutrient intake of 102 fecund women ages 18-45 years in a Florida urban clinic population was assessed using multiple regression analysis. Study subgroups were defined by race and Food Stamp Program participation status. Education was found to have the greatest influence on food and nutrient intake; race was the next most influential factor, followed in order by Food Stamp Program participation and income. The combined effect of the four independent variables explained no more than 19 percent of the variance for any of the food and nutrient intake variables, indicating that a more complex model of influences is needed if variations in food and nutrient intake are to be fully explained.

A socioeconomic questionnaire was administered to investigate other factors of influence. The influence of the mother, frequency and type of restaurant dining, and perceptions of food intake and weight were found to be factors deserving further study.

Dietary data were collected using the 24-hour recall and a food frequency checklist. Descriptive dietary findings indicated that iron and calcium were nutrients whose adequacy was of concern for all study subgroups. White Food Stamp Program participants had the greatest number of mean nutrient intake values falling below the 1980 Recommended Dietary Allowances (RDAs). When Food Stamp Program participants were contrasted with nonparticipants, mean intakes of six nutrients (kilocalories, calcium, iron, vitamin A, thiamin, and riboflavin) were below the 1980 RDA, compared to five mean nutrient intakes (kilocalories, calcium, iron, thiamin, and riboflavin) for the nonparticipants. Use of the Index of Nutritional Quality (INQ), however, revealed that the quality of the diet of Food Stamp Program participants per 1000 kilocalories was adequate with the exception of calcium and iron. Intakes of these nutrients were also inadequate on a 1000-kilocalorie basis for the nonparticipant group. When mean nutrient intakes of the groups were compared using Student's t-test, oleic acid intake was the only significant difference found. Being a nonparticipant in the Food Stamp Program was found to be associated with more frequent consumption of cookies, sweet rolls, doughnuts, and honey. The findings of this study contradict the negative image of the Food Stamp Program participant and emphasize the importance of education.
Abstract:
Background: Obesity is a major health problem in the United States that has reached epidemic proportions. With most U.S. adults spending the majority of their waking hours at work, the influence of the workplace environment on obesity is gaining in importance. Recent research implicates worksites as providing an 'obesogenic' environment, as they encourage overeating and reduce the opportunity for physical activity.

Objective: The aims of this study were to describe the nutrition and physical activity environment of Texas Medical Center (TMC) hospitals participating in the Shape Up Houston evaluation study, to develop a scoring system to quantify the environmental data collected using the Environmental Assessment Tool (EAT) survey, and to assess the inter-observer reliability of using the EAT survey.

Methods: A survey instrument adapted from the Environmental Assessment Tool (EAT), developed by DeJoy DM et al. in 2008 to measure hospital environmental support for nutrition and physical activity, was used for this study. The inter-observer reliability of using the EAT survey was measured and total percent agreement scores were computed. Most responses on the EAT survey are dichotomous (yes or no); these responses were coded '0' for a 'no' response and '1' for a 'yes' response. A summative scoring system was developed to quantify these responses: each hospital was given a score for each scale and subscale on the EAT survey, in addition to a total score. All analyses were conducted using Stata 11 software.

Results: High inter-observer reliability was observed using the EAT survey; percent agreement scores ranged from 94.4% to 100%. Only 2 of the 5 hospitals had a fitness facility onsite, and scores for exercise programs and outdoor facilities available to hospital employees ranged from 0–62% and 0–37.5%, respectively. Healthy eating scores for hospital cafeterias ranged from 42% to 92% across the different hospitals, while healthy vending scores were 0%–40%. The total TMC 'healthy hospital' score was 49%.

Conclusion: The EAT survey is a reliable instrument for measuring the physical activity and nutrition support environment of hospital worksites. The study results showed large variability among the TMC hospitals in the existing physical activity and nutrition support environment. This study proposes cost-effective policy changes that can increase environmental support for healthy eating and active living among TMC hospital employees.
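The summative scoring and percent-agreement computations described in this abstract are simple enough to sketch. The Python below is illustrative only (the study itself used Stata 11), and the item responses shown are hypothetical:

```python
# Illustrative sketch, not the study's code: (1) inter-observer percent
# agreement on dichotomous EAT items, (2) a summative scale score
# (yes=1, no=0) expressed as a percentage of the maximum possible.

def percent_agreement(rater_a, rater_b):
    """Share of items (as a %) on which two observers gave the same response."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

def scale_score(responses):
    """Summative score for one scale: sum of 0/1 items as % of max."""
    return 100.0 * sum(responses) / len(responses)

# Hypothetical responses for a 10-item scale at one hospital:
obs1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
obs2 = [1, 1, 0, 1, 0, 1, 1, 1, 1, 1]
print(percent_agreement(obs1, obs2))  # 90.0
print(scale_score(obs1))              # 70.0
```

A hospital's total score would then be the same summation taken over all scales combined.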
Abstract:
Household food insecurity is associated with threats to children’s intellectual, behavioral, and psycho-emotional development. In addition to poor food quality and quantity, the stress associated with food insecurity can undermine caregiver mental health and family functioning. Evidence demonstrates that national assistance programs and policies are needed to ensure that families and children have access to adequate sources of healthy food and to stress-alleviating resources.
Abstract:
Objectives. The purpose of this paper is to conduct a literature review of research relating to foodborne illness, food inspection policy, and restaurants in the United States. Aim 1: To convey the public health importance of studying restaurant food inspection policies and suggest that more research is needed in this field. Aim 2: To conduct a systematic review of recent literature on this subject so that future researchers can understand: (1) public perception and expectations of restaurant food inspection policies; (2) arguments in favor of a grade card policy; and, conversely, (3) reasons why inspection policies may not work.

Data/methods. This paper uses a systematic review format to review articles relating to food inspections and restaurants in the U.S. Eight articles were reviewed.

Results. The data from the literature provide no conclusive answer as to how, when, and by what method inspection policies should be carried out. The authors do, however, put forward varying solutions to the problem of foodborne illness outbreaks in restaurants. These solutions include the implementation of grade cards in restaurants and, conversely, a complete overhaul of the inspection policy system.

Discussion. The literature on foodborne disease, food inspection policy, and restaurants in the U.S. is limited and varied. From the research that is available, however, two schools of thought emerge: the first calls for the implementation of a grade card system, while the second proposes a reassessment and possible overhaul of the food inspection policy system. It is still unclear which of these approaches would best slow the increase in foodborne disease transmission in the U.S.

Conclusion. To arrive at solutions to the problem of foodborne disease transmission as it relates to restaurants in this country, we may need to look at literature from other countries and, subsequently, begin incremental changes in the way inspection policies are developed and enforced.
Abstract:
Blood cholesterol and blood pressure development in childhood and adolescence have an important impact on future adult cholesterol and blood pressure levels, and on increased risk of cardiovascular disease. The U.S. has higher mortality rates from coronary heart disease than Japan. A longitudinal comparison of risk factor development in children in the two countries provides more understanding of the causes of cardiovascular disease and its prevention. Such comparisons have not been reported in the past.

In Project HeartBeat!, 506 non-Hispanic white, 136 black, and 369 Japanese children participated in the study in the U.S. and Japan from 1991 to 1995. A synthetic cohort of ages 8 to 18 years was composed of three cohorts with starting ages of 8, 11, and 14. A multilevel regression model was used for data analysis.

The study revealed that the Japanese children had significantly higher slopes of mean total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol levels than the U.S. children after adjusting for age and sex. The mean TC level of Japanese children was not significantly different from that of white and black children. The mean HDL level of Japanese children was significantly higher than that of white and black children after adjusting for age and sex. The ratio of HDL/TC in Japanese children was significantly higher than in U.S. whites, but not significantly different from the black children. The Japanese group had significantly lower mean diastolic blood pressure at phase IV (DBP4) and phase V (DBP5) than the two U.S. groups. The Japanese group also showed significantly higher slopes in systolic blood pressure, DBP5, and DBP4 during the study period than both U.S. groups. These differences were independent of height and body mass index.

The study provided the first longitudinal comparison of blood cholesterol and blood pressure between U.S. and Japanese children and adolescents, and revealed the dynamic process of these factors in the three ethnic groups.
Abstract:
Very few studies have described MUP-1 concentrations and measured the prevalence of laboratory animal allergy (LAA) at as diverse an institution as the private medical school (MS) that is the focus of this study. Air sampling was performed in three dissimilar animal research facilities at MS and quantitated using a commercially available ELISA. Descriptive data were obtained from an anonymous laboratory animal allergy survey given both to animal facility employees and to the researchers who use these facilities. Logistic regression analysis was then used to investigate specific factors that may be predictive of developing LAA, as well as factors influencing the reporting of LAA symptoms to the occupational health program. Concentrations of MUP-1 ranged from below detectable levels (BDL) to a peak of 22.64 ng/m³. Overall, 68 employees with symptoms reported that they improved while away from work, and only 25 employees reported their symptoms to occupational health. Being Vietnamese, being a smoker, not wearing a mask, and working in any facility longer than one year were all significant predictors of having LAA symptoms. This study suggests that a LAA monitoring system that relies on self-reporting can be inadequate in estimating LAA problems. In addition, efforts need to be made to target training and educational materials to non-native English-speaking employees to overcome language and cultural barriers and address their specific needs.
Abstract:
Objectives. To investigate procedural gender equity by assessing predisposing, enabling, and need predictors of gender differences in annual medical expenditures and utilization among hypertensive individuals in the U.S., and to estimate and compare lifetime medical expenditures among hypertensive men and women in the U.S.

Data sources. The 2001-2004 Medical Expenditure Panel Survey (MEPS); the 1986-2000 National Health Interview Survey (NHIS); and the NHIS linked to mortality in the National Death Index through 2002 (2002 NHIS-NDI).

Study design. We estimated total medical expenditures using a four-equation regression model, specific medical expenditures using a two-equation regression model, and utilization using a negative binomial regression model. Procedural equity was assessed by applying the Aday et al. theoretical framework. Expenditures were estimated in 2004 dollars. We estimated hypertension-attributable medical expenditures and utilization among men and women. To estimate lifetime expenditures from ages 20 to 85+, we estimated medical expenditures with cross-sectional data and survival with prospective data. The four-equation regression model was used to estimate average annual medical expenditures, defined as the sum of inpatient stay, emergency room visit, outpatient visit, office-based visit, and prescription drug expenditures. Life tables were used to estimate the distribution of lifetime medical expenditures for hypertensive men and women at different ages; factors such as disease incidence, medical technology, and health care cost were assumed to be fixed. Both total and hypertension-attributable expenditures among men and women were estimated.

Data collection. We used the 2001-2004 MEPS household component and medical condition files; the NHIS person and condition files from 1986-1996 and sample adult files from 1997-2000; and the 1986-2000 NHIS linked to mortality in the 2002 NHIS-NDI.

Principal findings. Hypertensive men had significantly less utilization than hypertensive women for most measures after controlling for predisposing, enabling, and need factors. Similarly, hypertensive men had lower prescription drug (-9.3%), office-based (-7.2%), and total medical (-4.5%) expenditures than hypertensive women. However, men had more hypertension-attributable medical expenditures and utilization than women. Expected total lifetime expenditure for average life-table individuals at age 20 was $188,300 for hypertensive men and $254,910 for hypertensive women, but the lifetime expenditure attributable to hypertension was $88,033 for men and $40,960 for women.

Conclusion. Hypertensive women had more utilization and expenditure for most measures than hypertensive men, possibly indicating procedural inequity. However, the relatively higher hypertension-attributable health care of men shows more utilization of resources to treat hypertension-related diseases among men than women. Similar results were found in the lifetime analyses.

Key words: gender, medical expenditures, utilization, hypertension-attributable, lifetime expenditure
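The life-table logic behind the lifetime estimates can be sketched as follows. This is not the study's code, and the age-specific cost and mortality inputs below are made-up illustrative values: expected lifetime expenditure is the sum, over ages, of the probability of surviving to each age times the mean annual expenditure at that age.

```python
# Illustrative life-table sketch (not the study's model): accumulate
# survival-weighted annual costs from a starting age onward.

def expected_lifetime_expenditure(annual_cost_by_age, annual_mortality):
    """annual_cost_by_age and annual_mortality are dicts keyed by age."""
    survival = 1.0  # probability of being alive at the starting age
    total = 0.0
    for age in sorted(annual_cost_by_age):
        total += survival * annual_cost_by_age[age]
        survival *= 1.0 - annual_mortality.get(age, 0.0)
    return total

# Toy example: three ages, flat $2,000/yr cost, 1% annual mortality.
costs = {20: 2000.0, 21: 2000.0, 22: 2000.0}
mort = {20: 0.01, 21: 0.01, 22: 0.01}
print(round(expected_lifetime_expenditure(costs, mort), 2))  # 5940.2
```

In the study's setup, the annual costs would come from the regression model fit to MEPS data and the survival probabilities from the NHIS-NDI mortality linkage, with incidence, technology, and costs held fixed.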
Abstract:
Published reports have consistently indicated a high prevalence of serologic markers for hepatitis B virus (HBV) and hepatitis C virus (HCV) infection in U.S. incarcerated populations. Quantifying the current and projected burden of HBV and HCV infection and hepatitis-related sequelae in correctional healthcare systems with even modest precision remains elusive, however, because the prevalence and sequelae of HBV and HCV in U.S. incarcerated populations are not well studied. This dissertation contributes to the assessment of the burden of HBV and HCV infections in U.S. incarcerated populations by addressing some of the deficiencies and gaps in previous research.

The objectives of the three dissertation studies were: (1) to investigate selected study-level factors as potential sources of heterogeneity in published HBV seroprevalence estimates in U.S. adult incarcerated populations (1975-2005), using meta-regression techniques; (2) to quantify the potential influence of suboptimal sensitivity of screening tests for antibodies to hepatitis C virus (anti-HCV) on previously reported anti-HCV prevalence estimates in U.S. incarcerated populations (1990-2005), by comparing these estimates to error-adjusted anti-HCV prevalence estimates in these populations; and (3) to estimate death rates due to HBV, HCV, chronic liver disease (CLD/cirrhosis), and liver cancer from 1984 through 2003 in male prisoners in custody of the Texas Department of Criminal Justice (TDCJ), and to quantify the proportion of CLD/cirrhosis and liver cancer prisoner deaths attributable to HBV and/or HCV.

Results were as follows. Although the meta-regression analyses were limited by the small body of literature, mean population age and serum collection year appeared to be sources of heterogeneity, respectively, in prevalence estimates of hepatitis B surface antigen positivity (HBsAg+) and of any positive HBV marker. Other population characteristics and study methods could not be ruled out as sources of heterogeneity. Anti-HCV prevalence is likely somewhat higher in male and female U.S. incarcerated populations than previously estimated in studies using anti-HCV screening tests alone, without the benefit of repeat or additional testing. Death rates due to HBV, HCV, CLD/cirrhosis, and liver cancer from 1984 through 2003 in TDCJ male prisoners exceeded state and national rates. HCV rates appeared to be increasing and to be disproportionately affecting Hispanics. HCV was implicated in nearly one-third of liver cancer deaths.
Abstract:
This study investigates the association of race/ethnicity and acculturation variables (language preference and nativity) with use of contraception and contraceptive services among Mexican/Mexican American and “other” Hispanic women aged 15-44, compared to non-Hispanic white women.

Data were analyzed from the 2006-2008 National Survey of Family Growth. The sample contained 3,357 women aged 15-44. Multivariate logistic regression analysis was used to examine the association between race/ethnicity and acculturation variables and contraceptive-related behaviors, adjusted for other known covariates.

After multivariate analysis, neither nativity nor language preference was significantly associated with contraception use or contraceptive services. Mexican/Mexican American women did not differ in their contraception-related behaviors from non-Hispanic whites. Other Hispanic women, however, were less likely to obtain contraceptive services than non-Hispanic whites (OR=0.67, 95% CI=0.45-1.00). Women aged 30-39 and 40-44 were less likely to obtain contraception and contraceptive services than those aged 15-19. Single women were less likely to use contraception (OR=0.72, 95% CI=0.56-0.92) and contraceptive services (OR=0.69, 95% CI=0.53-0.89) than married/cohabiting women. Women with healthcare coverage were more likely to use contraception and contraceptive services than uninsured women.

Among Hispanic women of different origin groups, age, marital status, and healthcare coverage were stronger indicators of contraception-related behavior than race/ethnicity, language preference, and nativity. Reproductive health programs that aim to increase use of contraception and contraceptive services among Hispanic origin groups should specifically target women who are over 30, single, and uninsured.
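Adjusted odds ratios like those reported above come from exponentiating logistic regression coefficients. The sketch below is a generic illustration, not the study's analysis; the coefficient and standard error are hypothetical values chosen to yield an OR near 0.67:

```python
# Illustrative sketch: derive an odds ratio and Wald 95% CI from a
# logit coefficient (beta) and its standard error (se).
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, lower bound, upper bound) for beta with z*se margins."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and SE, for illustration only:
or_, lo, hi = odds_ratio_ci(beta=-0.40, se=0.20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 0.67 0.45 0.99
```

A CI whose upper bound reaches 1.00, as for the contraceptive-services estimate above, sits right at the conventional threshold of statistical significance.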
Abstract:
Recent data have shown that the percentage of time spent preparing food has decreased during the past few years, and little information is known about how much time people spend grocery shopping. Pre-prepared food is often higher in calories and fat than food prepared at home from scratch. It has been suggested that, because of the higher energy and total fat levels, increased consumption of pre-prepared foods compared to home-cooked meals can lead to weight gain, which in turn can lead to obesity. Nevertheless, to date no study has examined this relationship. The purpose of this study was to determine (i) the association between adult body mass index (BMI) and time spent preparing meals, and (ii) the association between adult BMI and time spent shopping for food.

Data on food habits and body size were collected with a self-report survey of ethnically diverse adults between the ages of 17 and 70 at a large university. The survey was used to recruit people to participate in nutrition or appetite studies. Among other data, the survey collected demographic data (gender, race/ethnicity), minutes per week spent preparing meals, and minutes per week spent grocery shopping. Height and weight were self-reported and used to calculate BMI. The study population consisted of 689 subjects, of whom 276 were male and 413 were female. The mean age was 23.5 years, with a median age of 21 years. The fraction of subjects with a BMI less than 24.9 was 65%; between 25 and 29.9, 26%; and 30 or greater, 9%. Analysis of variance was used to examine associations between food preparation time and BMI.

The results showed no statistically significant association of adult healthy weight, overweight, or obesity with either food preparation time or grocery shopping time. Of those in the sample who reported preparing food, mean food preparation time per week for the healthy weight, overweight, and obese groups was 12.8 minutes, 12.3 minutes, and 11.6 minutes, respectively. Similarly, mean weekly grocery shopping time for the healthy weight, overweight, and obese groups was 60.3 minutes (8.6 min/day), 61.4 minutes (8.8 min/day), and 57.3 minutes (8.2 min/day), respectively. Since this study was conducted on a university campus, it is assumed that most of the sample were students, and a percentage might have been using meal plans on campus and thus would have reported little meal preparation or grocery shopping time. Further research should examine the relationships between meal preparation time and time spent shopping for food in a sample more representative of the general public. In addition, most people spent very little time preparing food; thus, health promotion programs for this population need to focus on strategies for preparing quick meals or eating in restaurants/cafeterias.
Abstract:
The purpose of this study was to evaluate the theory-based Eat 5 nutrition badge, designed to increase fruit and vegetable (F&V) intake in 4th-6th grade Junior Girl Scouts. Twenty-two troops were recruited and randomized by grade level (4th, 5th, 6th, or mixed) into either the intervention or the control condition. Leaders in the intervention condition received a brief training and the materials, and conducted the program with their troops during four meetings. Girl Scouts in the intervention condition completed 1-day food frequency questionnaires and nutrition questionnaires both before and after completing the Eat 5 badge, with a third measurement of F&V intake three months after the posttest. Girl Scouts in the control condition were only evaluated at the three time periods.

The primary hypotheses were that Girl Scouts in the intervention condition would increase their daily intake of fruits and vegetables at both the posttest and three months later, compared to Girl Scouts in the control condition. Other study questions investigated the impact of the Eat 5 program on intervening variables such as knowledge, self-efficacy, barriers, norms, F&V preference, and F&V selection and preparation skills.

A nested ANOVA, with troop as the unit of analysis nested within condition, was used to assess the effects of the program. Pretest F&V intake and grade level were used as covariates. Pretest mean F&V intake for the total sample of 210 girls was 2.50 servings per day; it was 3.0 for the intervention group (n = 101). Significant increases in F&V intake (to 3.4 servings per day), knowledge, and fruit and vegetable preference were found for the intervention condition troops compared to the troops in the control condition. Three months later, mean F&V intake had returned to pretest levels.

This study indicates that social groups such as the Girl Scouts can provide a channel for nutrition education. Long-term effects were not sustained by the intervention; a possible cause was the lack of change in self-efficacy. Therefore, additional interventions such as booster lessons are recommended to maintain increased F&V intake by Girl Scouts.
Abstract:
The aim of this paper is to examine iron supplementation programs that receive funding from the United States Agency for International Development (USAID) but approach combating iron deficiency anemia in two vastly different ways. A brief literature review and background information on iron deficiency and the differences between supplementation programs and micronutrient fortification are presented. Two non-governmental organizations (NGOs) were examined for this paper: the Food and Nutrition Technical Assistance II (FANTA) project and the Micronutrient Initiative. The FANTA program included an educational component in its supplementation program, while the Micronutrient Initiative relied solely on supplementation of micronutrients for its population. Cost-benefit and cost-effectiveness analyses were used to determine the overall effectiveness of each program in reducing iron deficiency anemia in each population, whether the added costs of the incentives in the FANTA program changed the cost-effectiveness of the program compared to the Micronutrient Initiative program, and which program imparted the greatest benefit to each population by reducing the disease burden in Disability-Adjusted Life Years (DALYs).

Results showed that the unit cost of the FANTA program per person was higher than that of the Micronutrient Initiative program, due to the educational component. The FANTA program reduced iron deficiency anemia less overall but cost less for each percentage point of anemia decreased in its population. The Micronutrient Initiative program had a better benefit-cost ratio for the populations it served, and its large-scale program imparted many advantages by reducing unit cost per person and decreasing iron deficiency anemia. The FANTA program was more effective at decreasing iron deficiency anemia with less money: $5,660 per 1% decrease in iron deficiency anemia, versus $18,450 per 1% decrease for the Micronutrient Initiative program.

In conclusion, economic analysis cannot measure all of the benefits associated with programs that contain an educational component or large-scale supplementation. More information needs to be gathered by NGOs and reported to USAID, such as detailed prevalence rates of iron deficiency anemia among the populations served. Further research is needed to determine the effects an educational supplementation program has on participants' compliance rates and motivation to participate in supplementation programs whose aim is to decrease iron deficiency anemia in a targeted population.
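The per-percentage-point comparison above reduces to dividing total program cost by the percentage points of anemia reduction achieved. The sketch below is illustrative only: the total costs and reduction figures are hypothetical values chosen to be consistent with the per-point costs the abstract reports; only the $5,660 and $18,450 figures come from the source.

```python
# Illustrative cost-effectiveness sketch (not the NGOs' accounting):
# cost per percentage point = total cost / percentage points reduced.

def cost_per_point(total_cost, pct_point_reduction):
    """Dollars spent per 1-percentage-point decrease in anemia prevalence."""
    return total_cost / pct_point_reduction

# Hypothetical totals consistent with the reported per-point figures:
fanta = cost_per_point(56_600, 10.0)   # $5,660 per 1% decrease
mni = cost_per_point(369_000, 20.0)    # $18,450 per 1% decrease
print(fanta, mni)
```

The comparison illustrates how a program with a higher unit cost per person (or a larger total budget) can still achieve a lower cost per percentage point of reduction, and vice versa.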
Abstract:
Background. Previous studies suggest an association between the timing of introduction of solid food and increased risk of obesity in preschool-aged children, but no study has included a representative sample of US children. We sought to examine whether there was any association between the timing of solid food introduction and overweight/obesity in preschool-aged children.

Design/methods. Cross-sectional study of a nationally representative sample (N=2050) of US children aged 2 to 5 years with information on infant feeding practices and measured weight and height, from the National Health and Nutrition Examination Survey 2003–2008. The main outcome measure was BMI for age and sex ≥ 85th percentile. The main exposure was timing of solid food introduction at < 4, 4–5, or ≥ 6 months of age. Binomial logistic regression was used in the analysis, controlling for the child's sex, birth weight, and breastfeeding status, as well as maternal age at birth, smoking status, and socio-demographic variables.

Results. Two thousand and fifty children were included in the sample: 51% male and 49% female; 57.1% Non-Hispanic White, 21.9% Hispanic, 14.0% Non-Hispanic Black, and 7% other race/ethnicity. Twenty-two percent of the children were overweight or obese. Sixty-nine percent were breastfed or fed breast milk at birth, and 36% continued breastfeeding for ≥ 6 months. Solid foods were introduced before 4 months of age for 11.2% of the children; 30.3% received solid foods between 4 and 5 months; and 58.6% received solid foods at 6 months or later. Timing of solid food introduction was not associated with weight status (OR=1.36, 95% CI [0.83–2.24]). Formula-fed infants and infants breastfed for < 4 months had increased odds of overweight and obesity (OR=1.54, 95% CI [1.05–2.27] and OR=1.60, 95% CI [1.05–2.44], respectively) compared to infants breastfed for ≥ 6 months.

Conclusion. Timing of solid food introduction was not associated with weight status in a national sample of US children ages 2 to 5 years. More focus should be placed on promoting breastfeeding and healthy infant feeding practices as strategies to prevent obesity in children.
Abstract:
Problem: Medical and veterinary students memorize facts but then have difficulty applying those facts in clinical problem solving. Cognitive engineering research suggests that the inability of medical and veterinary students to infer concepts from facts may be due in part to specific features of how information is represented and organized in educational materials. First, physical separation of pieces of information may increase the cognitive load on the student. Second, information that is necessary but not explicitly stated may also contribute to the student’s cognitive load. Finally, the types of representations – textual or graphical – may also support or hinder the student’s learning process. This may explain why students have difficulty applying biomedical facts in clinical problem solving. Purpose: To test the hypothesis that three specific aspects of expository text – the spatial distance between the facts needed to infer a rule, the explicitness of information, and the format of representation – affected the ability of students to solve clinical problems. Setting: The study was conducted in the parasitology laboratory of a college of veterinary medicine in Texas. Sample: The study subjects were a convenience sample of 132 second-year veterinary students who matriculated in 2007. The age of this class upon admission ranged from 20 to 52, and its gender makeup was approximately 75% female and 25% male. Results: No statistically significant difference in student ability to solve clinical problems was found when relevant facts were placed in proximity, nor when an explicit rule was stated. Further, no statistically significant difference in student ability to solve clinical problems was found when students were given different representations of the material, including tables and concept maps.
Findings: The findings from this study indicate that the three properties investigated – proximity, explicitness, and representation – had no statistically significant effect on student learning as it relates to clinical problem-solving ability. However, ad hoc observations as well as findings from other researchers suggest that the subjects were probably using rote learning techniques such as memorization, and therefore were not attempting to infer relationships from the factual material in the interventions, unless they were specifically prompted to look for patterns. A serendipitous finding unrelated to the study hypothesis was that those subjects who correctly answered questions regarding functional (non-morphologic) properties, such as mode of transmission and intermediate host, at the family taxonomic level were significantly more likely to correctly answer clinical case scenarios than were subjects who did not correctly answer questions regarding functional properties. These findings suggest a strong relationship (p < .001) between well-organized knowledge of taxonomic functional properties and clinical problem solving ability. Recommendations: Further study should be undertaken investigating the relationship between knowledge of functional taxonomic properties and clinical problem solving ability. In addition, the effect of prompting students to look for patterns in instructional material, followed by the effect of factors that affect cognitive load such as proximity, explicitness, and representation, should be explored.