957 results for "Multivariate analysis of variance"


Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. This increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved the selection of parent-child trios (TDT). All nine tests were able to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests reached a maximum below one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium.

The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
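As a reference for the TDT-based tests discussed above, here is a minimal sketch of the classic Transmission Disequilibrium Test statistic, which compares how often heterozygous parents transmit a marker allele versus withhold it; the counts used are hypothetical:

```python
from scipy.stats import chi2

def tdt_statistic(transmitted: int, untransmitted: int) -> tuple[float, float]:
    """McNemar-type TDT statistic for counts of heterozygous parents
    who transmitted (b) vs. did not transmit (c) the marker allele."""
    b, c = transmitted, untransmitted
    stat = (b - c) ** 2 / (b + c)          # chi-square with 1 df under H0
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Hypothetical counts: 60 transmissions vs. 35 non-transmissions
stat, p = tdt_statistic(60, 35)
print(f"TDT chi-square = {stat:.2f}, p = {p:.4f}")
```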

Abstract:

The article offers a systematic analysis of the comparative trajectory of international democratic change. In particular, it focuses on the resulting convergence or divergence of political systems, borrowing from the literatures on institutional change and policy convergence. To this end, political-institutional data in line with Arend Lijphart's (1999, 2012) empirical theory of democracy are analyzed for 24 developed democracies between 1945 and 2010. Heteroscedastic multilevel models allow for directly modeling the development of the variance of types of democracy over time, revealing information about convergence and adding substantive explanations. The findings indicate a trend away from extreme types of democracy in single cases, but no unconditional trend of convergence can be observed. There are, however, conditional processes of convergence. In particular, economic globalization and the domestic veto structure interactively influence democratic convergence.
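To make the variance-modeling idea concrete, here is a hedged sketch of a heteroscedastic model in which the residual spread of a democracy indicator is itself a log-linear function of time, so a negative variance slope signals convergence. The data are simulated for illustration, not Lijphart's, and this is plain maximum likelihood rather than the article's multilevel specification:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.repeat(np.arange(14), 24)             # 14 periods x 24 countries
y = rng.normal(0.0, np.exp(0.5 - 0.05 * t))  # shrinking spread = convergence

def neg_log_lik(params, t, y):
    mu, g0, g1 = params
    sigma = np.exp(g0 + g1 * t)              # log-linear variance model
    return np.sum(0.5 * ((y - mu) / sigma) ** 2 + np.log(sigma))

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], args=(t, y))
mu_hat, g0_hat, g1_hat = res.x
print(f"variance slope g1 = {g1_hat:.3f}  (negative => convergence over time)")
```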

Abstract:

BACKGROUND: Patients undergoing laparoscopic Roux-en-Y gastric bypass (LRYGB) often have substantial comorbidities, which must be taken into account to appropriately assess expected postoperative outcomes. The Charlson/Deyo and Elixhauser indices are widely used comorbidity measures, both of which also have revised algorithms based on enhanced ICD-9-CM coding. It is currently unclear which of the existing comorbidity measures best predicts early postoperative outcomes following LRYGB. METHODS: Using the Nationwide Inpatient Sample, patients 18 years or older undergoing LRYGB for obesity between 2001 and 2008 were identified. Comorbidities were assessed according to the original and enhanced Charlson/Deyo and Elixhauser indices. Using multivariate logistic regression, the following early postoperative outcomes were assessed: overall postoperative complications, length of hospital stay, and conversion to open surgery. Model performance for the four comorbidity indices was assessed and compared using C-statistics and the Akaike information criterion (AIC). RESULTS: A total of 70,287 patients were included. Mean age was 43.1 years (SD, 10.8), 81.6% were female, and 60.3% were White. Both the original and enhanced Elixhauser indices modestly outperformed the Charlson/Deyo in predicting the surgical outcomes. All four models had similar C-statistics, but the original Elixhauser index was associated with the smallest AIC for all of the surgical outcomes. CONCLUSIONS: The original Elixhauser index is the best predictor of early postoperative outcomes in our cohort of patients undergoing LRYGB. However, differences between the Charlson/Deyo and Elixhauser indices are modest, and each of these indices provides clinically relevant insight for predicting early postoperative outcomes in this high-risk patient population.
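The model comparison described above can be sketched as follows: fit one logistic regression per comorbidity score and compare AIC and the C-statistic (area under the ROC curve). This is an illustration of the procedure on simulated data, not the study's NIS analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical patient table with a binary `complication` outcome and two
# alternative comorbidity scores (charlson, elixhauser).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "charlson": rng.poisson(2, 5000),
    "elixhauser": rng.poisson(3, 5000),
})
df["complication"] = rng.binomial(1, 1 / (1 + np.exp(2 - 0.3 * df["elixhauser"])))

for score in ["charlson", "elixhauser"]:
    X = sm.add_constant(df[[score]])
    fit = sm.Logit(df["complication"], X).fit(disp=False)
    auc = roc_auc_score(df["complication"], fit.predict(X))
    print(f"{score}: AIC = {fit.aic:.1f}, C-statistic = {auc:.3f}")
```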

Abstract:

OBJECTIVE To investigate the long-term prognostic implications of coronary calcification in patients undergoing percutaneous coronary intervention for obstructive coronary artery disease. METHODS Patient-level data from 6296 patients enrolled in seven clinical drug-eluting stent trials were analysed; the presence of severe coronary calcification in angiographic images was identified by an independent academic research organisation (Cardialysis, Rotterdam, The Netherlands). Clinical outcomes at 3-year follow-up, including all-cause mortality, death or myocardial infarction (MI), and the composite endpoint of all-cause death, MI or any revascularisation, were compared between patients with and without severe calcification. RESULTS Severe calcification was detected in 20% of the studied population. Patients with severe lesion calcification were less likely to have undergone complete revascularisation (48% vs 55.6%, p<0.001) and had increased mortality compared with those without severely calcified arteries (10.8% vs 4.4%, p<0.001). The event rate was also higher in patients with severely calcified lesions for the combined endpoints of death or MI (22.9% vs 10.9%; p<0.001) and death, MI or any revascularisation (31.8% vs 22.4%; p<0.001). On multivariate Cox regression analysis including the SYNTAX score, the presence of severe coronary calcification was an independent predictor of poor prognosis (HR 1.33, 95% CI 1.00 to 1.77, p=0.047 for death; HR 1.23, 95% CI 1.02 to 1.49, p=0.031 for death or MI; and HR 1.18, 95% CI 1.01 to 1.39, p=0.042 for death, MI or any revascularisation), but it was not associated with an increased risk of stent thrombosis. CONCLUSIONS Patients with severely calcified lesions have worse clinical outcomes than those without severe coronary calcification. Severe coronary calcification appears to be an independent predictor of worse prognosis and should be considered a marker of advanced atherosclerosis.
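A minimal sketch of the kind of multivariate Cox model reported above, using the lifelines library on simulated data (the variable names and effect sizes are assumptions, not the trial data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: follow-up time (years), death indicator,
# severe-calcification flag, and a SYNTAX-like score as an adjustment term.
rng = np.random.default_rng(2)
n = 2000
calc = rng.binomial(1, 0.2, n)
syntax = rng.normal(15, 8, n).clip(0)
hazard = 0.03 * np.exp(0.3 * calc + 0.02 * syntax)
time = np.minimum(rng.exponential(1 / hazard), 3.0)   # censor at 3 years
event = (time < 3.0).astype(int)

df = pd.DataFrame({"time": time, "death": event,
                   "severe_calc": calc, "syntax": syntax})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()   # hazard ratios = exp(coef), with 95% CIs
```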

Abstract:

INTRODUCTION Spinal disc herniation, lumbar spinal stenosis and spondylolisthesis are known to be leading causes of lumbar back pain. The costs of low back pain management and related operations are continuously increasing in the healthcare sector. There are many studies on complications after spine surgery, but little is known about the factors predicting the length of stay in hospital. The purpose of this study was to identify these factors in lumbar spine surgery in order to adapt the postoperative treatment. MATERIAL AND METHODS The current study was carried out as a post hoc analysis on the basis of the German spine registry. Included were patients who underwent lumbar spine surgery via posterior surgical access with posterior fusion and/or rigid stabilization; procedures with dynamic stabilization were excluded. Patient characteristics were tested for association with length of stay (LOS) using bivariate and multivariate analyses. RESULTS A total of 356 patients met the inclusion criteria. The average age of all patients was 64.6 years and the mean LOS was 11.9 ± 6.0 days, with a range of 2-44 days. Independent factors influencing LOS were increased age at the time of surgery, higher body mass index, male gender, blood transfusion of 1-2 erythrocyte concentrates, and the presence of surgical complications. CONCLUSION Identification of predictive factors for prolonged LOS may allow for estimation of patient hospitalization time and for optimization of postoperative care. In individual cases this may result in a reduction of the LOS.
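A hedged sketch of the multivariate step, on simulated register-like data: regress LOS on the candidate predictors and read off the adjusted coefficients. Variable names are illustrative:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical register extract: LOS in days with candidate predictors.
rng = np.random.default_rng(3)
n = 356
df = pd.DataFrame({
    "age": rng.normal(65, 12, n),
    "bmi": rng.normal(28, 5, n),
    "male": rng.binomial(1, 0.5, n),
    "transfusion": rng.binomial(1, 0.15, n),
    "complication": rng.binomial(1, 0.1, n),
})
df["los"] = (6 + 0.05 * df["age"] + 0.1 * df["bmi"] + df["male"]
             + 2 * df["transfusion"] + 3 * df["complication"]
             + rng.normal(0, 3, n)).clip(2)

fit = smf.ols("los ~ age + bmi + male + transfusion + complication", df).fit()
print(fit.summary())   # multivariate analysis of LOS predictors
```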

Abstract:

INTRODUCTION Patients admitted to intensive care following surgery for faecal peritonitis present particular challenges in terms of clinical management and risk assessment. Collaborating surgical and intensive care teams need shared perspectives on prognosis. We aimed to determine the relationship between dynamic assessment of trends in selected variables and outcomes. METHODS We analysed trends in physiological and laboratory variables during the first week of intensive care unit (ICU) stay in 977 patients at 102 centres across 16 European countries. The primary outcome was 6-month mortality. Secondary endpoints were ICU, hospital and 28-day mortality. For each trend, Cox proportional hazards (PH) regression analyses, adjusted for age and sex, were performed for each endpoint. RESULTS Trends over the first 7 days of the ICU stay independently associated with 6-month mortality were worsening thrombocytopaenia (mortality: hazard ratio (HR) = 1.02; 95% confidence interval (CI), 1.01 to 1.03; P < 0.001) and worsening renal function (total daily urine output: HR = 1.02; 95% CI, 1.01 to 1.03; P < 0.001; Sequential Organ Failure Assessment (SOFA) renal subscore: HR = 0.87; 95% CI, 0.75 to 0.99; P = 0.047), maximum bilirubin level (HR = 0.99; 95% CI, 0.99 to 0.99; P = 0.02) and Glasgow Coma Scale (GCS) SOFA subscore (HR = 0.81; 95% CI, 0.68 to 0.98; P = 0.028). Changes in renal function (total daily urine output and renal component of the SOFA score), the GCS component of the SOFA score, total SOFA score and worsening thrombocytopaenia were also independently associated with the secondary outcomes (ICU, hospital and 28-day mortality). We detected the same pattern when we analysed trends on days 2, 3 and 5. Dynamic trends in all other measured laboratory and physiological variables, in radiological findings, and in changes in respiratory support, renal replacement therapy and inotrope and/or vasopressor requirements were not retained as independently associated with outcome in multivariate analysis. CONCLUSIONS Only deterioration in renal function, thrombocytopaenia and SOFA score over the first 2, 3, 5 and 7 days of the ICU stay were consistently associated with mortality at all endpoints. These findings may help to inform clinical decision making in patients with this common cause of critical illness.
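One way to operationalize a "trend" analysis like the one above: summarize each patient's first-week trajectory as a per-day slope, then enter that slope into a Cox model. A sketch on simulated platelet data (all names and magnitudes are assumptions):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical ICU data: platelet counts on days 1-7 for each patient;
# the day-by-day slope is the "trend" entered into the Cox model.
rng = np.random.default_rng(4)
n, days = 500, np.arange(1, 8)
slopes = rng.normal(0, 5, n)                      # true platelet trends
platelets = 200 + slopes[:, None] * days + rng.normal(0, 10, (n, 7))
trend = np.array([np.polyfit(days, row, deg=1)[0] for row in platelets])

hazard = 0.002 * np.exp(-0.05 * trend)            # falling counts -> higher risk
time = np.minimum(rng.exponential(1 / hazard), 180.0)
df = pd.DataFrame({"time": time, "death": (time < 180.0).astype(int),
                   "platelet_trend": trend})
CoxPHFitter().fit(df, duration_col="time", event_col="death").print_summary()
```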

Abstract:

Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data, such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases which are difficult to classify.
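For readers unfamiliar with latent class models, here is a compact EM sketch for two classes and binary symptoms, the discrete-variability case described above (simulated data; real analyses would use dedicated software and model-selection criteria):

```python
import numpy as np

# Minimal EM for a two-class latent class model of binary symptoms,
# a simplified analogue of the phenotype analysis described above.
rng = np.random.default_rng(5)
true_p = np.array([[0.8, 0.7, 0.6], [0.2, 0.1, 0.3]])   # P(symptom | class)
z = rng.binomial(1, 0.4, 1000)
X = rng.binomial(1, true_p[z])                           # 1000 x 3 symptom matrix

K, D = 2, X.shape[1]
pi = np.full(K, 1 / K)                     # class weights
p = rng.uniform(0.3, 0.7, (K, D))          # symptom probabilities per class
for _ in range(200):
    # E-step: responsibilities via Bernoulli likelihoods
    like = np.prod(p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None]), axis=2)
    resp = like * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update weights and per-class symptom probabilities
    pi = resp.mean(axis=0)
    p = (resp.T @ X) / resp.sum(axis=0)[:, None]

print("class sizes:", np.round(pi, 2))
print("symptom profiles:\n", np.round(p, 2))
```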

Abstract:

The attentional blink (AB) is a fundamental limitation of the ability to select relevant information from irrelevant information. It can be observed with the detection rate in an AB task as well as with the corresponding P300 amplitude of the event-related potential. In previous research, however, correlations between these two levels of observation were weak and rather inconsistent. A possible explanation of this finding might be that multiple processes underlie the AB and, thus, obscure a possible relationship between AB-related detection rate and the corresponding P300 amplitude. The present study investigated this assumption by applying a fixed-links modeling approach to represent behavioral individual differences in the AB as a latent variable. Concurrently, this approach enabled us to control for additional sources of variance in AB performance by deriving two additional latent variables. The correlation between the latent variable reflecting behavioral individual differences in AB magnitude and a corresponding latent variable derived from the P300 amplitude was high (r=.70). Furthermore, this correlation was considerably stronger than the correlations of other behavioral measures of the AB magnitude with their psychophysiological counterparts (all rs<.40). Our findings clearly indicate that the systematic disentangling of various sources of variance by utilizing the fixed-links modeling approach is a promising tool to investigate behavioral individual differences in the AB and possible psychophysiological correlates of these individual differences.
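The latent-variable logic can be illustrated with a much simpler stand-in for the fixed-links approach: extract one factor from several behavioral AB measures and one from matching P300 measures, then correlate the factor scores. Simulated data; note that fixed-links models additionally separate task-specific and general variance components, which this sketch does not:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulate a latent AB magnitude that drives three behavioral indicators
# and (more weakly) three P300 indicators.
rng = np.random.default_rng(6)
n = 300
ab = rng.normal(size=n)                                # latent AB magnitude
behav = ab[:, None] * [0.8, 0.7, 0.6] + rng.normal(0, 0.5, (n, 3))
p300 = 0.7 * ab[:, None] * [0.8, 0.7, 0.6] + rng.normal(0, 0.6, (n, 3))

f_behav = FactorAnalysis(n_components=1).fit_transform(behav).ravel()
f_p300 = FactorAnalysis(n_components=1).fit_transform(p300).ravel()
# A factor's sign is arbitrary, so compare the magnitude of the correlation.
print(f"latent |r| = {abs(np.corrcoef(f_behav, f_p300)[0, 1]):.2f}")
```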

Abstract:

Do siblings of centenarians tend to have longer life spans? To answer this question, the life spans of 184 siblings of 42 centenarians were evaluated. Two important questions were addressed in analyzing the sibling data. First, a standard needed to be established to which the life spans of the 184 siblings could be compared. In this report, an external reference population is constructed from the U.S. life tables. Its estimated mortality rates are treated as baseline hazards from which the relative mortality of the siblings is estimated. Second, standard survival models that assume independent observations are invalid when correlation within families exists, underestimating the true variance. Methods that allow for correlation are illustrated in three ways. First, the cumulative relative excess mortality between the siblings and their comparison group is calculated and used as an effective graphical tool, along with the product-limit estimator of the survival function. The variance estimator of the cumulative relative excess mortality is adjusted for potential within-family correlation using a Taylor linearization approach. Second, approaches that adjust for the inflated variance are examined: an adjusted one-sample log-rank test using the design effect originally proposed by Rao and Scott in the correlated binomial or Poisson distribution setting, and the robust variance estimator derived from the log-likelihood function of a multiplicative model. Neither of these two approaches provides a correlation estimate within families, but the comparison with the standard remains valid under dependence. Last, using the frailty model concept, the multiplicative model, where the baseline hazards are known, is extended by adding a random frailty term based on the positive stable or the gamma distribution. Comparisons between the two frailty distributions are performed by simulation. Based on the results from the various approaches, it is concluded that the siblings of centenarians had significantly lower mortality rates than their cohorts. The frailty models also indicate significant correlations between the life spans of the siblings.
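The first step described above, comparing sibling mortality to a life-table standard, reduces to a standardized mortality ratio when summarized over age. A sketch with toy reference hazards (the within-family correlation adjustments discussed in the abstract are beyond this illustration):

```python
import numpy as np

# Relative-mortality estimation against a life-table standard: the ratio
# of observed deaths to deaths expected under reference hazards.
# Rates and person-years below are illustrative, not the study's data.
ages = np.arange(60, 100)
ref_hazard = 0.005 * np.exp(0.09 * (ages - 60))   # toy Gompertz life table

# Sibling cohort: person-years at risk and observed deaths per age
person_years = np.linspace(184, 20, ages.size)
observed_deaths = np.random.default_rng(7).poisson(0.7 * ref_hazard * person_years)

expected = (ref_hazard * person_years).sum()
smr = observed_deaths.sum() / expected            # < 1 => lower mortality
se_log = 1 / np.sqrt(observed_deaths.sum())       # Poisson approximation
ci = np.exp(np.log(smr) + np.array([-1.96, 1.96]) * se_log)
print(f"SMR = {smr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```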

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the BUGS ('Bayesian inference Using Gibbs Sampling') implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
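A modern sketch of the bivariate binomial idea in PyMC: study-level logit-sensitivity and logit-specificity are drawn from a correlated bivariate normal, with binomial likelihoods for the observed counts. The thesis used BUGS with uniform priors; here an LKJ prior on the covariance is substituted, and the counts are hypothetical:

```python
import numpy as np
import pymc as pm

# Hypothetical per-study diagnostic counts.
tp = np.array([45, 30, 60, 22, 51])
fn = np.array([5, 10, 8, 6, 9])
tn = np.array([80, 55, 90, 40, 70])
fp = np.array([10, 15, 12, 9, 14])
n_studies = tp.size

with pm.Model() as bivariate_binomial:
    mu = pm.Normal("mu", 0.0, 2.0, shape=2)       # pooled logit(se), logit(sp)
    chol, corr, sds = pm.LKJCholeskyCov(
        "chol", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0),
        compute_corr=True)
    theta = pm.MvNormal("theta", mu=mu, chol=chol, shape=(n_studies, 2))
    sens = pm.Deterministic("sens", pm.math.sigmoid(theta[:, 0]))
    spec = pm.Deterministic("spec", pm.math.sigmoid(theta[:, 1]))
    pm.Binomial("tp_obs", n=tp + fn, p=sens, observed=tp)
    pm.Binomial("tn_obs", n=tn + fp, p=spec, observed=tn)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=8)

print(idata.posterior["mu"].mean(dim=("chain", "draw")))
```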

Abstract:

Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma. Yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death.

Methods. In this retrospective follow-up study, the records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September and October 2006. Patients with TNM stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, the UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, TNM stage, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect any significant differences between groups at various endpoints.

Results. Compared to negative lymph nodes at the time of nephrectomy, a single positive lymph node was associated with significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology showed significantly longer metastasis-free survival (p=0.003). Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and time to metastasis with log conversion (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years, compared with metastasis before one and two years, conferred a statistically significant survival benefit (p=0.004 and p=0.0318). Time from evaluation to death was affected by a greater than one year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk-stratified the patient population for survival in a statistically significant manner (p=0.001). No other factors were found to be significant.

Conclusion. Clear cell histology is predictive for both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
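A sketch of the Kaplan-Meier and log-rank comparison reported above, using lifelines on simulated data (grouping by a hypothetical one-year metastasis-free interval flag):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Simulated survival after metastasis, grouped by whether the
# metastasis-free interval exceeded one year (toy effect sizes).
rng = np.random.default_rng(9)
n = 64
late_mets = rng.binomial(1, 0.5, n)                  # >1 year to metastasis
time = rng.exponential(np.where(late_mets, 30, 14))  # months
event = rng.binomial(1, 0.9, n)

km = KaplanMeierFitter()
for flag, label in [(1, ">1 yr to metastasis"), (0, "<=1 yr to metastasis")]:
    mask = late_mets == flag
    km.fit(time[mask], event[mask], label=label)
    print(label, "median survival:", km.median_survival_time_)

res = logrank_test(time[late_mets == 1], time[late_mets == 0],
                   event_observed_A=event[late_mets == 1],
                   event_observed_B=event[late_mets == 0])
print(f"log-rank p = {res.p_value:.4f}")
```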

Abstract:

We conducted a nested case-control study to determine the significant risk factors for developing encephalitis from West Nile virus (WNV) infection. The purpose of this research project was to expand the previously published Houston study of patients from 2002–2004 to include data on Houston patients from four additional years (2005–2008), to determine whether the larger sample size revealed any differences in the risk factors associated with the more severe outcomes of WNV infection, encephalitis and death. A re-analysis of the risk factors for encephalitis and death, conducted on all of the patients from 2002–2008, was the focus of this research. Retrospective medical chart reviews were completed for the 265 confirmed WNV hospitalized patients; 153 patients had encephalitis (WNE), 112 had either viral syndrome with fever (WNF) or meningitis (WNM), and a total of 22 patients died. Univariate logistic regression analyses of demographic, comorbidity, and social risk factors were conducted in a similar manner as in the previous study to determine the risk factors for developing encephalitis from WNV. A multivariate model was developed using model-building strategies for multivariate logistic regression analysis. The hypothesis of this study was that additional risk factors would prove significant with the increase in sample size. This analysis, with its greater sample size and increased power, supports the hypothesis: additional risk factors were statistically associated with the more severe outcomes of WNV infection (WNE or death). Based on univariate logistic regression results, age 20–44 years, which had a statistically significant protective effect against WNE in the original study, lacked significance in the expanded sample. Chronic alcohol abuse emerged as a significant WNE risk factor, although it was not significant in the original analysis. Other WNE risk factors identified in this analysis that were not significant in the original analysis were cancer not in remission > 5 years, history of stroke, and chronic renal disease. When comparing the two analyses with death as an outcome, two risk factors that were significant in the original analysis but not in the expanded dataset were diabetes mellitus and immunosuppression. Three risk factors significant in this expanded analysis but not in the original study were illicit drug use, heroin or opiate use, and injection drug use. In the multiple logistic regression models, however, the same independent risk factors for developing encephalitis, age and history of hypertension (including drug-induced hypertension), were consistent in both studies.
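The univariate screening step can be sketched as follows: one logistic regression per candidate factor, reporting the odds ratio and its 95% CI. Simulated data; the factor names are illustrative, not the study's full list:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated risk-factor table for a binary outcome
# (encephalitis vs. milder disease).
rng = np.random.default_rng(10)
n = 265
df = pd.DataFrame({
    "hypertension": rng.binomial(1, 0.4, n),
    "chronic_alcohol": rng.binomial(1, 0.15, n),
    "diabetes": rng.binomial(1, 0.2, n),
})
logit_p = -0.5 + 0.9 * df["hypertension"] + 0.8 * df["chronic_alcohol"]
df["encephalitis"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

for factor in ["hypertension", "chronic_alcohol", "diabetes"]:
    X = sm.add_constant(df[[factor]])
    fit = sm.Logit(df["encephalitis"], X).fit(disp=False)
    or_, (lo, hi) = np.exp(fit.params[factor]), np.exp(fit.conf_int().loc[factor])
    print(f"{factor}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```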

Abstract:

Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of immunizations that have been administered is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion, or "opt-in," and voluntary exclusion, or "opt-out." In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system.

Objectives. The specific objectives are to: (1) evaluate the variance in the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; and (3) describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system.

Methods. Existing time-motion studies (n=281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed were deidentified and contain no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated.

Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated at $194,000 per year.

Conclusions. Substantial variation in the time and costs associated with completion of ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
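Using only the figures quoted above, a back-of-envelope comparison of per-newborn consent costs under the two participation models (the annual totals come from the abstract; everything else is arithmetic):

```python
# Per-newborn consent-cost comparison from the abstract's annual figures.
ANNUAL_BIRTHS = 388_285
OPT_IN_TOTAL = 702_000      # USD per year, from the abstract
OPT_OUT_TOTAL = 194_000     # USD per year, from the abstract

per_child_in = OPT_IN_TOTAL / ANNUAL_BIRTHS
per_child_out = OPT_OUT_TOTAL / ANNUAL_BIRTHS
print(f"opt-in:  ${per_child_in:.2f} per newborn")    # ~ $1.81
print(f"opt-out: ${per_child_out:.2f} per newborn")   # ~ $0.50
print(f"annual savings: ${OPT_IN_TOTAL - OPT_OUT_TOTAL:,}")
```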

Abstract:

The relative influence of race, income, education, and Food Stamp Program participation/nonparticipation on the food and nutrient intake of 102 fecund women aged 18-45 years in a Florida urban clinic population was assessed using multiple regression analysis. Study subgroups were defined by race and Food Stamp Program participation status. Education was found to have the greatest influence on food and nutrient intake. Race was the next most influential factor, followed in order by Food Stamp Program participation and income. The combined effect of the four independent variables explained no more than 19 percent of the variance for any of the food and nutrient intake variables. This indicates that a more complex model of influences is needed if variations in food and nutrient intake are to be fully explained.

A socioeconomic questionnaire was administered to investigate other factors of influence. The influence of the mother, the frequency and type of restaurant dining, and perceptions of food intake and weight were found to be factors deserving further study.

Dietary data were collected using the 24-hour recall and a food frequency checklist. Descriptive dietary findings indicated that iron and calcium were nutrients whose adequacy was of concern for all study subgroups. White Food Stamp Program participants had the greatest number of mean nutrient intake values falling below the 1980 Recommended Dietary Allowances (RDAs). When Food Stamp Program participants were contrasted with nonparticipants, mean intakes of six nutrients (kilocalories, calcium, iron, vitamin A, thiamin, and riboflavin) were below the 1980 RDA, compared with five mean nutrient intakes (kilocalories, calcium, iron, thiamin, and riboflavin) for the nonparticipants. Use of the Index of Nutritional Quality (INQ), however, revealed that the quality of the diet of Food Stamp Program participants per 1000 kilocalories was adequate with the exception of calcium and iron. Intakes of these nutrients were also not adequate on a 1000-kilocalorie basis for the nonparticipant group. When mean nutrient intakes of the groups were compared using Student's t-test, oleic acid intake was the only significant difference found. Being a nonparticipant in the Food Stamp Program was found to be associated with more frequent consumption of cookies, sweet rolls, doughnuts, and honey. The findings of this study contradict the negative image of the Food Stamp Program participant and emphasize the importance of education.
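The variance-explained claim above corresponds to the R-squared of a multiple regression of each intake variable on the four predictors. A hedged sketch on simulated data with illustrative variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the study data: one nutrient intake regressed on
# education, income, race, and Food Stamp Program participation.
rng = np.random.default_rng(11)
n = 102
df = pd.DataFrame({
    "education_yrs": rng.integers(6, 17, n),
    "income": rng.normal(12_000, 4_000, n),
    "race_white": rng.binomial(1, 0.5, n),
    "fsp_participant": rng.binomial(1, 0.5, n),
})
df["iron_mg"] = (6 + 0.3 * df["education_yrs"] + 0.0001 * df["income"]
                 + rng.normal(0, 3, n))

fit = smf.ols("iron_mg ~ education_yrs + income + race_white + fsp_participant",
              df).fit()
print(f"R-squared = {fit.rsquared:.2f}")   # share of intake variance explained
print(fit.params.round(3))
```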

Abstract:

This dissertation develops and tests, through path analysis, a theoretical model to explain how socioeconomic, socioenvironmental, and biologic risk factors simultaneously influence each other to produce short-term, depressed growth in preschoolers. Three areas of risk factors were identified: the child's proximal environment, maturational stage, and biological vulnerability. The theoretical model represented both the conceptual framework and the nature and direction of the hypotheses. Original research completed in 1978-80 and in 1982 provided the background data. It was analyzed first by nested analysis of variance, followed by path analysis. The study provided evidence of mild iron deficiency and gastrointestinal symptomatology in the etiology of depressed, short-term weight gain. There was also evidence suggesting that family resources for material and social survival contribute significantly to the variability of short-term, age-adjusted growth velocity. These results challenge current views of unifocal intervention, whether for prevention or control. For policy formulation, though, the mechanisms underlying any set of interlaced relationships must be decoded. The theoretical formulations proposed here should be reassessed under a more extensive research design. It is suggested that studies be undertaken where social changes are actually in progress; otherwise, nutritional epidemiology in developing countries operates somewhere between social reality and research concepts, with little grasp of its real potential. The study stresses that there is a connection between substantive theory, empirical observation, and policy issues.
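Classic path analysis can be carried out as a sequence of OLS regressions on standardized variables, with path coefficients read off as standardized betas. A toy sketch loosely patterned on the model described above (environment, iron status, weight gain; all names and magnitudes are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy path diagram: environment -> iron status -> growth, plus a direct
# environment -> growth path. Data are simulated for illustration.
rng = np.random.default_rng(12)
n = 400
env = rng.normal(size=n)                       # family resources (proximal env.)
iron = 0.5 * env + rng.normal(0, 0.8, n)       # biological vulnerability
growth = 0.3 * env + 0.4 * iron + rng.normal(0, 0.8, n)

df = pd.DataFrame({"env": env, "iron": iron, "growth": growth})
df = (df - df.mean()) / df.std()               # standardize -> path coefficients

p_iron = smf.ols("iron ~ env", df).fit().params["env"]
fit_g = smf.ols("growth ~ env + iron", df).fit()
direct, via_iron = fit_g.params["env"], p_iron * fit_g.params["iron"]
print(f"direct env->growth path: {direct:.2f}")
print(f"indirect env->iron->growth path: {via_iron:.2f}")
```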