897 results for multivariable regression
Abstract:
Objective. One facet of cancer care that is often ignored is comorbidity, the presence of diseases that exist in concert with cancer. Comorbid conditions may affect survival by influencing treatment decisions and prognosis. The purpose of this secondary data analysis was to identify whether a history of cardiovascular comorbidities among ovarian cancer patients influenced survival time at the University of Texas M. D. Anderson Cancer Center. The parent study, Project Peace, has a longitudinal design with an embedded randomized efficacy study that seeks to improve detection of depressive disorders in ovarian, peritoneal, and fallopian tube cancers.

Methods. Survival time was calculated for the 249 ovarian cancer patients abstracted by Project Peace staff. Cardiovascular comorbidities were documented as present based on information from medical records in addition to self-reported comorbidities in a baseline study questionnaire. Kaplan-Meier survival curves were used to compare survival time between patients with and without particular cardiovascular comorbidities. Cox proportional hazards regression models adjusted for covariates such as age, stage, family history of cardiovascular comorbidities, and treatment.

Results. Among our patient population, there was a statistically significant relationship between shorter survival time and a history of thrombosis, pericardial disease/tamponade, or COPD/pulmonary hypertension. Ovarian cancer patients with a history of thrombosis lived approximately half as long as patients without thrombosis (58.06 months vs. 121.55 months; p=.001). In addition, patients who suffered from pericardial disease/tamponade had poorer survival than those without a history of pericardial disease/tamponade (48 months vs. 80.07 months; p=.002). Ovarian cancer patients with a history of COPD or pulmonary hypertension had a median survival of 60.2 months, while the median survival for patients without these comorbidities was 80.2 months (p=.014).

Conclusion. Because of ovarian cancer's relatively low survival rate, greater emphasis needs to be placed on the potential influence of cardiovascular comorbid conditions in ovarian cancer care.
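To make the survival methods concrete, the sketch below shows a Kaplan-Meier comparison by thrombosis history, a log-rank test, and a covariate-adjusted Cox model in Python with the lifelines package. The simulated data, column names, and package choice are illustrative assumptions, not the study's actual variables or software.

```python
# Minimal sketch of a Kaplan-Meier / Cox survival comparison of this kind.
# Simulated data and column names are assumptions, not study values.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 249
df = pd.DataFrame({
    "months": rng.exponential(90, n),       # survival time in months
    "died": rng.integers(0, 2, n),          # 1 = death observed, 0 = censored
    "thrombosis": rng.integers(0, 2, n),    # history of thrombosis
    "age": rng.normal(60, 10, n),
    "stage": rng.integers(1, 5, n),
})

# Kaplan-Meier curves by thrombosis history
groups = {g: df[df["thrombosis"] == g] for g in (0, 1)}
for g, d in groups.items():
    kmf = KaplanMeierFitter()
    kmf.fit(d["months"], event_observed=d["died"], label=f"thrombosis={g}")
    print(g, "median survival:", kmf.median_survival_time_)

# Log-rank test between the two groups
res = logrank_test(groups[0]["months"], groups[1]["months"],
                   event_observed_A=groups[0]["died"],
                   event_observed_B=groups[1]["died"])
print("log-rank p =", res.p_value)

# Cox proportional hazards model adjusting for age and stage
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```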
Abstract:
Background. Colorectal cancer (CRC) is the third most commonly diagnosed cancer (excluding skin cancer) in both men and women in the United States, with an estimated 148,810 new cases and 49,960 deaths in 2008 (1). Racial/ethnic disparities have been reported across the CRC care continuum. Studies have documented racial/ethnic disparities in CRC screening (2-9), but only a few studies have looked at these differences in CRC screening over time (9-11). No studies have compared these trends between a population with CRC and a population without cancer. Additionally, although there is evidence suggesting that hospital factors (e.g., teaching hospital status and NCI designation) are associated with CRC survival (12-16), no studies have sought to explain the racial/ethnic differences in survival by looking at differences in socio-demographics, tumor characteristics, screening, co-morbidities, and treatment, as well as hospital characteristics.

Objectives and Methods. The overall goals of this dissertation were to describe the patterns and trends of racial/ethnic disparities in CRC screening (i.e., fecal occult blood test (FOBT), sigmoidoscopy (SIG), and colonoscopy (COL)) and to determine whether racial/ethnic disparities in CRC survival are explained by differences in socio-demographic factors, tumor characteristics, screening, co-morbidities, treatment, and hospital factors. These goals were accomplished in a two-paper format.

In Paper 1, "Racial/Ethnic Disparities and Trends in Colorectal Cancer Screening in Medicare Beneficiaries with Colorectal Cancer and without Cancer in SEER Areas, 1992-2002", the study population consisted of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and 62,917 Medicare beneficiaries without cancer during the same time period. Both cohorts were aged 67 to 89 years and resided in 16 Surveillance, Epidemiology and End Results (SEER) regions of the United States. Screening procedures between 6 months and 3 years prior to the date of diagnosis for CRC patients, and prior to the index date for persons without cancer, were identified in Medicare claims. The crude and age-gender-adjusted percentages and odds ratios of receiving FOBT, SIG, or COL were calculated. Multivariable logistic regression was used to assess the effect of race/ethnicity on the odds of receiving CRC screening over time.

Paper 2, "Racial/Ethnic Disparities in Colorectal Cancer Survival: To what extent are racial/ethnic disparities in survival explained by racial differences in socio-demographics, screening, co-morbidities, treatment, tumor or hospital characteristics", included a cohort of 50,186 Medicare beneficiaries diagnosed with CRC from 1992 to 2002 and residing in 16 SEER regions of the United States, identified in the SEER-Medicare linked database. Survival was estimated using the Kaplan-Meier method. Cox proportional hazards modeling was used to estimate hazard ratios (HR) of mortality and 95% confidence intervals (95% CI).

Results. The screening analysis demonstrated racial/ethnic disparities in screening over time among the cohort without cancer. From 1992 to 1995, Blacks and Hispanics were less likely than Whites to receive FOBT (OR=0.75, 95% CI: 0.65-0.87; OR=0.50, 95% CI: 0.34-0.72, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.72-0.85; OR=0.67, 95% CI: 0.54-0.75, respectively). Blacks and Hispanics were less likely than Whites to receive SIG from 1992 to 1995 (OR=0.75, 95% CI: 0.57-0.98; OR=0.29, 95% CI: 0.12-0.71, respectively), but their odds of screening increased from 2000 to 2002 (OR=0.79, 95% CI: 0.68-0.93; OR=0.50, 95% CI: 0.35-0.72, respectively).

The survival analysis showed that Blacks had worse CRC-specific survival than Whites (HR: 1.33, 95% CI: 1.23-1.44), but this was reduced for stage I-III disease after full adjustment for socio-demographic factors, tumor characteristics, screening, co-morbidities, treatment, and hospital characteristics (aHR=1.24, 95% CI: 1.14-1.35). Socioeconomic status, tumor characteristics, treatment, and co-morbidities contributed to the reduction in hazard ratios between Blacks and Whites with stage I-III disease. Asians had better survival than Whites before (HR: 0.73, 95% CI: 0.64-0.82) and after (aHR: 0.80, 95% CI: 0.70-0.92) adjusting for all predictors for stage I-III disease. For stage IV disease, both Asians and Hispanics had better survival than Whites, and after full adjustment, survival improved (aHR=0.73, 95% CI: 0.63-0.84; aHR=0.74, 95% CI: 0.61-0.92, respectively).

Conclusion. Screening disparities remain between Blacks and Whites, and between Hispanics and Whites, but have decreased in recent years. Future studies should explore other factors that may contribute to screening disparities, such as physician recommendations and language/cultural barriers, in this and younger populations. There were substantial racial/ethnic differences in CRC survival among older Whites, Blacks, Asians, and Hispanics. Co-morbidities, SES, tumor characteristics, treatment, and other predictor variables contributed to, but did not fully explain, the CRC survival differences between Blacks and Whites. Future research should examine the role of quality of care, particularly the benefit of treatment and post-treatment surveillance, in racial disparities in survival.
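A minimal sketch of the kind of screening model described in Paper 1: multivariable logistic regression of FOBT receipt on race/ethnicity, time period, and their interaction, with age and gender as covariates. The statsmodels formula, variable names, and simulated data are assumptions for illustration only.

```python
# Sketch of a screening-disparity model: logit of FOBT receipt on
# race/ethnicity x period, adjusted for age and gender. Illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "fobt": rng.integers(0, 2, n),
    "race": rng.choice(["White", "Black", "Hispanic"], n),
    "period": rng.choice(["1992-1995", "1996-1999", "2000-2002"], n),
    "age": rng.integers(67, 90, n),
    "female": rng.integers(0, 2, n),
})

model = smf.logit("fobt ~ C(race, Treatment('White')) * C(period) + age + female",
                  data=df).fit(disp=False)
odds_ratios = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```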
Abstract:
Background. A few studies have reported gender differences along the colorectal cancer (CRC) continuum, but none has done so longitudinally or compared a cancer and a non-cancer population.

Objectives and Methods. To examine gender differences in colorectal cancer screening (CRCS); to examine trends in gender differences in CRC screening among two groups of patients (Medicare beneficiaries with and without cancer); to examine gender differences in CRC incidence; and to examine any differences over time. In Paper 1, the study population consisted of men and women, ages 67-89 years, with CRC (73,666) or without any cancer (39,006), residing in 12 U.S. Surveillance, Epidemiology and End Results (SEER) regions. Crude and age-adjusted percentages and odds ratios of receiving fecal occult blood test (FOBT), sigmoidoscopy (SIG), or colonoscopy (COL) were calculated. Multivariable logistic regression was used to assess the effect of gender on the odds of receiving CRC screening over time.

In Paper 2, age-adjusted incidence rates and proportions over time were reported across race, CRC subsite, CRC stage, and SEER region for 373,956 patients, ages 40+ years, residing in 9 SEER regions and diagnosed with malignant CRC.

Results. Overall, women had higher CRC screening rates than men, and screening rates in general were higher in the SEER sample of persons with a CRC diagnosis. Significant temporal divergence in FOBT screening was observed between men and women in both cohorts. Although the largest temporal increases in screening rates were found for COL, especially among the cohort with CRC, little change in the gender gap was observed over time. Receipt of FOBT was significantly associated with female gender, especially in the period of full Medicare coverage. Receipt of COL was significantly associated with male gender, especially in the period of limited Medicare coverage.

Overall, approximately equal numbers of men (187,973) and women (185,983) were diagnosed with malignant CRC. Men had significantly higher age-adjusted CRC incidence rates than women across all categories of age, race, subsite, stage, and SEER region, even though rates declined in all categories over time. Significant moderate increases in the rate difference occurred among 40-59 year olds; significant reductions occurred among patients aged 70+, within the rectal subsite, unstaged and distant-stage CRC, and eastern and western SEER regions.

Conclusions. Persistent gender differences in CRC incidence across time may have implications for gender-based interventions that take age into consideration. A shift toward proximal cancer was observed over time for both genders, but the high proportion of men who develop rectal cancer suggests that a greater proportion of men may need to be targeted with newer screening methods such as fecal DNA testing or COL. Although previous reports have documented higher CRC screening among men, the higher incidence of CRC observed among men suggests that higher-risk categories of men are probably not being reached. FOBT utilization rates among women have increased over time, and the gender gap widened between 1998 and 2005. COL utilization is associated with male gender, but the differences over time are small.
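The incidence comparison in Paper 2 rests on direct age standardization. Below is a small sketch of that calculation for gender-specific rates; the age bands, counts, and standard-population weights are invented for illustration and are not SEER figures.

```python
# Direct age standardization of incidence rates by gender (illustrative numbers).
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "age_band":    ["40-59", "60-69", "70+"],
    "cases_men":   [9000, 45000, 134000],
    "py_men":      [40e6, 12e6, 10e6],       # person-years at risk
    "cases_women": [8000, 40000, 138000],
    "py_women":    [41e6, 13e6, 14e6],
    "std_weight":  [0.55, 0.25, 0.20],        # standard population weights (sum to 1)
})

def age_adjusted_rate(cases, person_years, weights):
    """Directly standardized rate per 100,000 person-years."""
    crude_by_band = cases / person_years
    return float(np.sum(crude_by_band * weights) * 1e5)

men = age_adjusted_rate(data["cases_men"], data["py_men"], data["std_weight"])
women = age_adjusted_rate(data["cases_women"], data["py_women"], data["std_weight"])
print(f"age-adjusted rate, men:   {men:.1f} per 100,000")
print(f"age-adjusted rate, women: {women:.1f} per 100,000")
print(f"rate difference (men - women): {men - women:.1f}")
```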
Abstract:
Objectives. Triple-negative breast cancers (TNBC) lack expression of estrogen receptors (ER) and progesterone receptors (PR) and show no Her2 gene amplification. Current literature has identified TNBC and over-expression of cyclo-oxygenase-2 (COX-2) protein in primary breast cancer as independent markers of poor prognosis in terms of overall and distant disease-free survival. The purpose of this study was to compare COX-2 over-expression in TNBC patients to patients who expressed one or more of the three tumor markers (i.e., ER, and/or PR, and/or Her2).

Methods. Using a secondary data analysis, a cross-sectional design was implemented to examine the association of interest. Data collected from two ongoing protocols, "LAB04-0657: a model for COX-2 mediated bone metastasis (Specific aim 3)" and "LAB04-0698: correlation of circulating tumor cells and COX-2 expression in primary breast cancer metastasis", were used for analysis. A sample of 125 female patients was analyzed using chi-square tests and logistic regression models.

Results. COX-2 over-expression was present in 33% (41/125) of patients, and 28% (35/124) were identified as having TNBC. TNBC status was associated with elevated COX-2 expression (OR=3.34; 95% CI=1.40-8.22) and high tumor grade (OR=4.09; 95% CI=1.58-10.82). In a multivariable analysis, TNBC status was an important predictor of COX-2 expression after adjusting for age, menopausal status, BMI, and lymph node status (OR=3.31; 95% CI: 1.26-8.67; p=0.01).

Conclusion. TNBC is associated with COX-2 expression, a known marker of poor prognosis in patients with operable breast cancer. Replication of these results in a study with a larger sample size, or a future randomized clinical trial demonstrating improved prognosis with COX-2 suppression in these patients, would support this hypothesis.
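A brief sketch of the cross-sectional analysis described above: a chi-square test of TNBC status against COX-2 over-expression, followed by a logistic model adjusted for age, menopausal status, BMI, and lymph node status. Data and variable names are illustrative assumptions.

```python
# Chi-square test plus adjusted logistic regression for a binary marker outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n = 125
df = pd.DataFrame({
    "cox2_high": rng.integers(0, 2, n),
    "tnbc": rng.integers(0, 2, n),
    "age": rng.normal(55, 11, n),
    "postmenopausal": rng.integers(0, 2, n),
    "bmi": rng.normal(28, 5, n),
    "node_positive": rng.integers(0, 2, n),
})

table = pd.crosstab(df["tnbc"], df["cox2_high"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

fit = smf.logit("cox2_high ~ tnbc + age + postmenopausal + bmi + node_positive",
                data=df).fit(disp=False)
print("adjusted OR for TNBC:", np.exp(fit.params["tnbc"]),
      "95% CI:", np.exp(fit.conf_int().loc["tnbc"]).values)
```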
Abstract:
Earlier age at puberty is a known risk factor for breast cancer and is suspected to influence prostate cancer; yet few studies have assessed early-life risk factors for puberty. The overall objective was to determine the relationship between birth-weight-for-gestational-age (BWGA), weight gain in infancy, and pubertal status in girls and boys at 10.8 and 11.8 years of age who were born to preeclamptic (PE) and normotensive (NT) mothers. Data for this study were collected from hospital and public health medical records and at a follow-up visit at 10.8 and 11.8 years for girls and boys, respectively. We used stratified analysis and multivariable logistic regression modeling to assess effect measure modification and to determine the relationship between BWGA, weight gain in infancy and childhood, and pubertal status.

There was no difference in the relationship between BWGA and pubertal status by maternal PE status for girls or boys; however, there was a non-significant increase in the odds of having been born small-for-gestational-age (SGA) in girls who were pubertal for breast or pubic hair Tanner stage 2+ compared to those who were B1 or PH1. In contrast, boys who were pubertal for genital and pubic hair Tanner stage 2+ had lower odds of having been born SGA than those who were prepubertal (G1 or PH1).

In girls who were pubertal for breast development, the odds per additional SD unit of weight gain were highest between 3-6 months and 6-12 months for those who were B2+ vs. B1. For pubic hair development, weight gain between 6-12 months had the greatest effect for girls of PE mothers only. In boys, there were no statistically significant associations between weight gain and genital Tanner stage at any of the intervals; however, weight gain between 3-6 months did affect pubic hair Tanner stage in boys of NT mothers. This study provides important evidence regarding the role of SGA and weight gain at specific age intervals on puberty; however, larger studies are needed to shed light on modifiable exposures for behavioral interventions in pregnancy, postpartum, and in childhood.
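A rough sketch of the effect-measure-modification check described above: the SGA-puberty logistic model fitted within strata of maternal preeclampsia, then refitted with an interaction term. The simulated data and variable names are assumptions, not the study's records.

```python
# Stratified logistic models plus an interaction test for effect modification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "pubertal": rng.integers(0, 2, n),        # Tanner stage 2+ vs. 1
    "sga": rng.integers(0, 2, n),             # born small-for-gestational-age
    "maternal_pe": rng.integers(0, 2, n),     # preeclamptic vs. normotensive mother
    "weight_gain_3_6m": rng.normal(0, 1, n),  # SD units
})

# Stratum-specific odds ratios for SGA
for pe, stratum in df.groupby("maternal_pe"):
    fit = smf.logit("pubertal ~ sga + weight_gain_3_6m", data=stratum).fit(disp=False)
    print(f"PE={pe}: OR for SGA = {np.exp(fit.params['sga']):.2f}")

# Interaction model: a small p-value for sga:maternal_pe suggests modification
inter = smf.logit("pubertal ~ sga * maternal_pe + weight_gain_3_6m",
                  data=df).fit(disp=False)
print("interaction p =", inter.pvalues["sga:maternal_pe"])
```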
Abstract:
Purpose. To evaluate trends in the utilization of head, abdominal, thoracic, and other body region CTs in the management of victims of motor vehicle collisions (MVC) at a level I trauma center from 1996 to 2006.

Methods. From the trauma registry, I identified patients involved in MVCs at a level I trauma center and categorized them into three age groups: 13-18, 19-55, and ≥56 years. I used International Classification of Diseases (ICD-9-CM) codes to find the type and number of CT examinations performed for each patient. I plotted the mean number of CTs per patient against year of admission to obtain a crude estimate of the change in utilization pattern for each type of CT. I used logistic regression to assess whether repetitive CTs (≥2) of the head, abdomen, thorax, and other body regions were associated with age group and year of admission for MVC patients. I adjusted the estimates for gender, ethnicity, insurance status, mechanism and severity of injury, intensive care unit admission status, patient disposition (dead or alive), and year of admission.

Results. Utilization of head, abdominal, thoracic, and other body region CTs increased significantly over the 11-year period. Utilization of head CT was greatest in the 13-18 age group and increased from 0.58 CT/patient in 1996 to 1.37 CT/patient in 2006. Abdominal CTs were more common in the ≥56 age group and increased from 0.33 CT/patient in 1996 to 0.72 CT/patient in 2006. Utilization of thoracic CTs was higher in the ≥56 age group and increased from 0.01 CT/patient in 1996 to 0.42 CT/patient in 2006. Utilization of other CTs did not change materially during the study period for adolescents, adults, or older adults. In the multivariable analysis, after adjustment for potential confounders, repetitive head CTs significantly increased in the 13-18 age group (95% CI: 1.29-1.87, p<0.001) relative to the 19-55 age group. Repetitive thoracic CT use was lower in adolescents (95% CI: 0.22-0.70, p<0.001) relative to the 19-55 age group.

Conclusion. There has been a substantial increase in the utilization of head, abdominal, thoracic, and other CTs in the management of MVC patients. Future studies need to identify whether increased utilization of CTs has resulted in better health outcomes for these patients.
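The crude trend estimate described in the methods is a simple group-wise mean of CTs per patient by admission year and age group. A small sketch with registry-style columns and counts that are assumptions rather than the actual trauma registry data:

```python
# Mean head CTs per patient by admission year and age group (illustrative data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(12)
n = 8000
df = pd.DataFrame({
    "year": rng.integers(1996, 2007, n),
    "age_group": rng.choice(["13-18", "19-55", ">=56"], n),
    "head_cts": rng.poisson(0.9, n),   # CT count per patient from ICD-9-CM procedure codes
})

trend = (df.groupby(["year", "age_group"])["head_cts"]
           .mean()
           .unstack("age_group")
           .round(2))
print(trend)   # rows: admission year; columns: age group; values: mean CTs/patient
```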
Abstract:
A high prevalence of overweight and obesity among preschool children in the low-income population is consistently documented in research, with one of every seven low-income, preschool-aged children classified as obese. Parental feeding practices are potential contributing factors to the obesity epidemic; however, the impact of parental feeding practices on obesity in preschool-age children has not been well explored. The purpose of this study was to determine the relationships between obesity in low-income, preschool-age children and the parental feeding practices of using dessert, sweets, or candy as a reward for finishing foods; restricting dessert if the child does not finish their plate at dinner; asking the child to consume everything on their plate at dinner; and having family dinners.

A cross-sectional secondary data analysis was completed using STATA 11 statistical software. Descriptive statistics were computed to summarize demographic and BMI data of participants, as well as parental feeding behavior variables. Pearson correlations were used to assess associations between parental feeding behavior variables and BMI z-scores. Predictive relationships between the variables were explored through multivariable linear regression analysis. Regression analyses were also completed adjusting for the confounders gender, age, and ethnicity.

Results revealed (1) no significant correlations or predictive trends between the use of rewards, forced consumption, or family dinner and BMI in low-income preschool-age children, and (2) a significant negative correlation and predictive trend between restriction of desserts and BMI in low-income preschool-age children. Since the analysis supported the null hypothesis for the practices of reward use, forced consumption, and family dinner, these practices are not considered risk factors for obese-level BMIs. The inverse association found between the practice of restriction and BMI suggests it is unnecessary to discourage parents from using restriction. Limitations of the study included the sample size, the reliability of the answers provided on the Healthy Home Survey by participant guardians, and the generalizability of the sample to the larger population.
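The study's analysis was run in STATA 11; the sketch below reproduces the same structure (Pearson correlations of each feeding practice with BMI z-score, then an adjusted linear model) in Python purely for illustration. Variable names and data are assumptions.

```python
# Pearson correlations followed by a multivariable linear model for BMI z-score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "bmi_z": rng.normal(0.5, 1, n),
    "reward_with_sweets": rng.integers(1, 6, n),   # 1-5 frequency scale
    "restrict_dessert": rng.integers(1, 6, n),
    "clean_plate": rng.integers(1, 6, n),
    "family_dinner": rng.integers(0, 8, n),        # dinners per week
    "age_months": rng.integers(24, 61, n),
    "male": rng.integers(0, 2, n),
    "ethnicity": rng.choice(["Hispanic", "Black", "White", "Other"], n),
})

for practice in ["reward_with_sweets", "restrict_dessert", "clean_plate", "family_dinner"]:
    r, p = pearsonr(df[practice], df["bmi_z"])
    print(f"{practice}: r = {r:.2f}, p = {p:.3f}")

fit = smf.ols("bmi_z ~ restrict_dessert + reward_with_sweets + clean_plate + "
              "family_dinner + age_months + male + C(ethnicity)", data=df).fit()
print(fit.summary().tables[1])
```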
Abstract:
The aims of the study were to determine the prevalence of, and factors that affect, non-adherence to first-line antiretroviral (ARV) medications among HIV-infected children and adolescents in Botswana. The study used secondary data from the Botswana-Baylor Children's Clinical Center of Excellence for the period from June 2008 to February 10, 2010. The study design was cross-sectional, and a case-comparison between non-adherent and adherent participants was used to examine the effects of socio-demographic and medication factors on non-adherence to ARV medications. A case was defined as a non-adherent child with an adherence level <95% based on pill count and measurement of liquid formulations. The comparison group consisted of children with adherence levels ≥95%.

A total of 842 participants met the eligibility criteria for determination of the prevalence of non-adherence, and 338 participants (169 cases and 169 comparison participants) were used in the analysis to estimate the effects of factors on non-adherence.

Univariate and multivariable logistic regression were used to estimate the association between non-adherence (outcome) and socio-demographic and medication factors (exposures). The prevalence of non-adherence for participants on first-line ARV medications was 20.0% (169/842).

Increasing age (OR (95% CI): 1.10 (1.04-1.17), p = 0.001) was associated with non-adherence, while an increasing number of caregivers (OR (95% CI): 0.72 (0.56-0.93), p = 0.01) and an increasing number of monthly visits (OR (95% CI): 0.92 (0.86-0.99), p = 0.02) were associated with good adherence in both the unadjusted and adjusted models. For the categorical variables, having more than two caregivers (OR (95% CI): 0.66 (0.28-0.84), p = 0.002) was associated with good adherence even in the adjusted model.

Conclusion. The prevalence of non-adherence to antiretroviral medicines among the study population was estimated to be 20.0%. In previous studies, adherence levels of ≥95% have been associated with better clinical outcomes and suppression of the virus to prevent development of resistance. Older age, fewer caregivers, and fewer monthly visits were associated with non-adherence. Strategies to improve and sustain adherence, especially among older children, are needed. The role of caregivers and social support should be investigated further.
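A minimal sketch of the unadjusted versus adjusted logistic models described above, with odds ratios and confidence intervals on the exponentiated scale. The simulated dataset and variable names are assumptions.

```python
# Unadjusted and adjusted logistic regression for non-adherence (<95%).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 338
df = pd.DataFrame({
    "non_adherent": np.repeat([1, 0], n // 2),   # 169 cases, 169 comparisons
    "age_years": rng.integers(2, 19, n),
    "n_caregivers": rng.integers(1, 5, n),
    "monthly_visits": rng.integers(1, 13, n),
})

def odds_ratio_table(fit):
    """Exponentiate coefficients and CIs, attach p-values, drop the intercept."""
    out = np.exp(pd.concat([fit.params.rename("OR"), fit.conf_int()], axis=1))
    out["p"] = fit.pvalues
    return out.drop(index="Intercept")

crude = smf.logit("non_adherent ~ age_years", data=df).fit(disp=False)
adjusted = smf.logit("non_adherent ~ age_years + n_caregivers + monthly_visits",
                     data=df).fit(disp=False)
print("unadjusted:\n", odds_ratio_table(crude))
print("adjusted:\n", odds_ratio_table(adjusted))
```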
Factors associated with needle sharing among Black male injection drug users in Harris County, Texas
Abstract:
Background. Injection drug users (IDUs) are at increased risk for HIV transmission due to unique risk behaviors, such as sharing needles. In Houston, IDUs account for 18% of all HIV/AIDS cases among Black males.

Objectives. This analysis compared demographic, behavioral, and psychosocial characteristics of needle-sharing and non-sharing IDUs in a population of Black males in Harris County, Texas.

Methods. Data used for this analysis were from the second IDU cycle of the National HIV Behavioral Surveillance System. This dataset included a sample of 288 Black male IDUs. Univariate and multivariable statistical analyses were performed to determine statistically significant associations with needle sharing in this population and to create a functional model to inform local HIV prevention programs.

Results. Half of the participants in this analysis had shared needles in the past 12 months. Compared to non-sharers, sharers were more likely to be homeless (OR=3.70, p<0.01) or arrested in the past year (OR=2.31, p<0.01), to inject cocaine (OR=2.07, p<0.01), to report male-to-male sex in the past year (OR=6.97, p<0.01), and to exchange sex for money or drugs. Sharers were less likely than non-sharers to have graduated high school (OR=0.36, p<0.01), to earn $5,000 or more a year (OR=1.15, p=0.05), to get needles from a medical source (OR=0.59, p=0.03), and to have ever tested for HIV (OR=0.17, p<0.01). Sharers were more likely to report depressive symptoms (OR=3.49, p<0.01), lower scores on the family support scale (mean difference 0.41, p=0.01) and decision-making confidence scale (mean difference 0.38, p<0.01), and greater risk-taking (mean difference -0.49, p<0.01) than non-sharers. In a multivariable logistic regression, sharers were less likely to have graduated high school (OR=0.33, p<0.01) and to have been tested for HIV (OR=0.12, p<0.01), and were more likely to have been arrested in the past year (OR=2.3, p<0.01), to get needles from a street source (OR=3.87, p<0.01), to report male-to-male sex (OR=7.01, p<0.01), and to have depressive symptoms (OR=2.36, p=0.02) and increased risk-taking (OR=1.78, p=0.01).

Conclusions. IDUs who shared needles differed from those who did not, reporting lower socioeconomic status, more sexual and drug risk behaviors, more depressive symptoms, and greater risk-taking. These findings suggest that intervention programs that also address these demographic, behavioral, and psychosocial factors may be more successful in decreasing needle sharing in this population.
Abstract:
Genome-wide association study (GWAS) analytical methods were applied in a large biracial sample of individuals to investigate variation across the genome for its association with a surrogate low-density lipoprotein (LDL) particle size phenotype, the ratio of LDL-cholesterol level to ApoB level. Genotyping was performed on the Affymetrix 6.0 GeneChip with approximately one million single nucleotide polymorphisms (SNPs). The ratio of LDL-cholesterol to ApoB was calculated, and association tests used multivariable linear regression analysis with an additive genetic model after adjustment for the covariates sex, age, and BMI. Association tests were performed separately in African Americans and Caucasians. There were 9,562 qualified individuals in the Caucasian group and 3,015 qualified individuals in the African American group. Overall, in Caucasians, two statistically significant loci were identified as being associated with the ratio of LDL-cholesterol to ApoB: rs10488699 (p<5×10⁻⁸, 11q23.3, near BUD13) and rs964184 (p<5×10⁻⁸, 11q23.3, near ZNF259). We also found a suggestive association for rs12286037 (p<4×10⁻⁷, 11q23.3, near APOA5/A4/C3/A1) in the Caucasian sample. In exploratory analyses, a difference in the pattern of association between individuals taking and not taking LDL-cholesterol-lowering medications was observed: individuals who were not taking medications had smaller p-values than those taking medication. In the African American group, there were no significant (p<5×10⁻⁸) or suggestive (p<4×10⁻⁷) associations with the ratio of LDL-cholesterol to ApoB after adjusting for age, BMI, and sex and comparing individuals with and without LDL-cholesterol-lowering medication.

Conclusions. There were significant and suggestive associations between SNP genotype and the ratio of LDL-cholesterol to ApoB in Caucasians, but these associations may be modified by medication treatment.
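A single-SNP version of the association test described above: linear regression of the LDL-cholesterol/ApoB ratio on additively coded genotype, adjusted for sex, age, and BMI. In a real GWAS this test is repeated across roughly one million SNPs against the genome-wide threshold of 5×10⁻⁸; the data below are simulated and the effect size is an assumption.

```python
# One SNP association test with an additive genetic model and covariate adjustment.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 9562
maf = 0.3
genotype = rng.binomial(2, maf, n)          # additive coding: copies of the minor allele
covars = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(50, 12, n),
    "bmi": rng.normal(27, 5, n),
})
# Simulated phenotype: LDL-C/ApoB ratio with a small additive genotype effect
ratio = 0.9 + 0.02 * genotype + 0.001 * covars["age"] + rng.normal(0, 0.1, n)

X = sm.add_constant(pd.concat([pd.Series(genotype, name="snp"), covars], axis=1))
fit = sm.OLS(ratio, X).fit()
beta, p = fit.params["snp"], fit.pvalues["snp"]
print(f"beta = {beta:.4f}, p = {p:.2e}, genome-wide significant: {p < 5e-8}")
```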
Abstract:
Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations.

Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical, and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared to linear or categorical models, the fractional polynomial models, with higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on the sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, which ranged from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model.

Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with an unknown or complex correct model specification. Simulation of power for misspecified models confirmed the results based on correlation methods but also illustrated the effect of model degrees of freedom on power.
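A compact sketch of the third-section idea: simulate an outcome from a curved true relationship and compare the rejection rate (power) when the exposure is entered linearly versus as quartile categories. The true curve, sample size, and number of simulations are assumptions, so the numbers will not match the paper's.

```python
# Power simulation under model misspecification: linear vs. categorical exposure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

def power(n, n_sims=2000, alpha=0.05, spec="linear"):
    rejections = 0
    for _ in range(n_sims):
        x = rng.uniform(0, 10, n)
        y = 0.4 * np.sqrt(x) + rng.normal(0, 1, n)   # true (non-linear) relationship
        if spec == "linear":
            design = sm.add_constant(x)
        else:  # categorical: quartile indicator variables
            quartile = np.digitize(x, np.quantile(x, [0.25, 0.5, 0.75]))
            design = sm.add_constant(np.column_stack(
                [(quartile == k).astype(float) for k in (1, 2, 3)]))
        fit = sm.OLS(y, design).fit()
        # reject if the exposure terms are jointly significant (overall F-test)
        if fit.f_pvalue < alpha:
            rejections += 1
    return rejections / n_sims

for spec in ("linear", "categorical"):
    print(spec, "power at n=50:", power(50, spec=spec))
```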
Abstract:
We investigated cross-sectional associations between intakes of zinc, magnesium, heme and non-heme iron, beta-carotene, vitamin C, and vitamin E and inflammation and subclinical atherosclerosis in the Multi-Ethnic Study of Atherosclerosis (MESA). We also investigated prospective associations between those micronutrients and incident metabolic syndrome (MetS), type 2 diabetes (T2D), and cardiovascular disease (CVD). Participants between 45 and 84 years of age at baseline were followed between 2000 and 2007. Dietary intake was assessed at baseline using a 120-item food frequency questionnaire. Multivariable linear regression and Cox proportional hazards regression models were used to evaluate the associations of interest. Dietary intakes of non-heme iron and Mg were inversely associated with total homocysteine (tHcy) concentrations (geometric means across quintiles: 9.11, 8.86, 8.74, 8.71, and 8.50 µmol/L for non-heme iron, and 9.20, 9.00, 8.65, 8.76, and 8.33 µmol/L for Mg; p-trends <0.001). Mg intake was inversely associated with high common carotid intima-media thickness (CC-IMT); odds ratio (95% CI) for extreme quintiles 0.76 (0.58, 1.01), p-trend: 0.002. Dietary Zn and heme iron were positively associated with CRP (geometric means: 1.73, 1.75, 1.78, 1.88, and 1.96 mg/L for Zn and 1.72, 1.76, 1.83, 1.86, and 1.94 mg/L for heme iron). In the prospective analysis, dietary vitamin E intake was inversely associated with incident MetS and with incident CVD (HR [CI] for extreme quintiles: MetS 0.78 [0.62-0.97], p-trend=0.01; CVD 0.69 [0.46-1.03], p-trend=0.04). Intakes of heme iron from red meat and Zn from red meat, but not from other sources, were each positively associated with risk of CVD (HR [CI]: heme iron from red meat 1.65 [1.10-2.47], p-trend=0.01; Zn from red meat 1.51 [1.02-2.24], p-trend=0.01) and MetS (HR [CI]: heme iron from red meat 1.25 [0.99-1.56], p-trend=0.03; Zn from red meat 1.29 [1.03-1.61], p-trend=0.04). All associations evaluated were similar across strata of gender, race/ethnicity, and alcohol intake. Most of the micronutrients investigated were not associated with the outcomes of interest in this multi-ethnic cohort. These observations do not provide consistent support for the hypothesized associations of individual nutrients with inflammatory markers, MetS, T2D, or CVD. However, nutrients consumed in red meat, or consumption of red meat as a whole, may increase the risk of MetS and CVD.
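A sketch of the prospective quintile analysis: Cox models for incident CVD with indicator variables for quintiles of a nutrient intake and a trend test using the quintile median. The simulated cohort, follow-up structure, and use of the lifelines package are assumptions for illustration.

```python
# Cox models across intake quintiles plus a p-trend using quintile medians.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 5000
df = pd.DataFrame({
    "followup_yrs": rng.exponential(6, n).clip(0.1, 7),
    "cvd_event": rng.integers(0, 2, n),
    "vitamin_e": rng.lognormal(2, 0.5, n),   # intake, mg/day
    "age": rng.normal(62, 10, n),
    "female": rng.integers(0, 2, n),
})
df["quintile"] = pd.qcut(df["vitamin_e"], 5, labels=False)           # 0..4
df["trend"] = df.groupby("quintile")["vitamin_e"].transform("median")

# Indicator variables for quintiles 2-5 (lowest quintile is the reference)
dummies = pd.get_dummies(df["quintile"], prefix="q", drop_first=True).astype(float)
model_df = pd.concat([df[["followup_yrs", "cvd_event", "age", "female"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_yrs", event_col="cvd_event")
print(np.exp(cph.params_))   # hazard ratios for each quintile vs. the lowest

trend_fit = CoxPHFitter().fit(df[["followup_yrs", "cvd_event", "age", "female", "trend"]],
                              duration_col="followup_yrs", event_col="cvd_event")
print("p-trend:", trend_fit.summary.loc["trend", "p"])
```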
Abstract:
The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model.

The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background.

Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study.

The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor that depends on the amount of censoring in the two populations and the strength of the dependency between death and censoring in the two populations. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative. The hazard ratio tends to a specific limit.

Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.
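The bias described for dependent censoring can be seen in a simple simulation. The sketch below uses a shared gamma frailty to make censoring informative (a simplification, not the paper's Gumbel Type A or compartmental formulations) and compares the Cox estimate with the hazard ratio used to generate the data.

```python
# Informative censoring via a shared frailty between death and censoring times.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 20000
group = rng.integers(0, 2, n)                  # binary covariate, data generated with HR = 2.0
frailty = rng.gamma(2.0, 0.5, n)               # shared by death and censoring processes

death_rate = 0.05 * frailty * np.exp(np.log(2.0) * group)
censor_rate = 0.05 * frailty                   # censoring depends on the same frailty
t_death = rng.exponential(1.0 / death_rate)
t_censor = rng.exponential(1.0 / censor_rate)

df = pd.DataFrame({
    "time": np.minimum(t_death, t_censor),
    "event": (t_death <= t_censor).astype(int),
    "group": group,
})
fit = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("estimated HR:", np.exp(fit.params_["group"]), "(data generated with HR = 2.0)")
```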
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples from 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, C_p and S_p, each combined with an 'all possible subsets' or 'forward selection' search of variables. The estimators of performance utilized include parametric (MSEP_m) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures.

The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of C_p and S_p. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample.

Only the random split estimator is conditionally (on β) unbiased; however, MSEP_m is unbiased on average and PRESS is nearly so in unselected (fixed form) models. When subset selection techniques are used, MSEP_m and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value within the context of stochastic regressor variables.

To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g., PRESS) be used for assessment.
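The recommended assessment is a leave-one-out statistic computed on the full sample. The sketch below computes PRESS via the hat-matrix shortcut e_i/(1 - h_ii) for a linear model with simulated multivariate-normal regressors; the dimensions and coefficients are assumptions.

```python
# PRESS (leave-one-out prediction error) via the hat-matrix shortcut.
import numpy as np

rng = np.random.default_rng(10)
n, p = 100, 6
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
beta = np.array([1.0, 0.5, 0.0, 0.0, -0.8, 0.3])
y = X @ beta + rng.normal(0, 1, n)

Xd = np.column_stack([np.ones(n), X])               # design matrix with intercept
H = Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T            # hat matrix
resid = y - H @ y                                   # ordinary residuals
press = np.sum((resid / (1 - np.diag(H))) ** 2)     # leave-one-out prediction error
mse_apparent = np.sum(resid ** 2) / n
print(f"apparent MSE = {mse_apparent:.3f}, PRESS/n = {press / n:.3f}")
```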
Abstract:
The purpose of this study was to determine, for penetrating injuries (gunshot, stab) of the chest/abdomen, the impact on fatality of treatment in trauma centers and shock trauma units compared with general hospitals. Medical records of all cases of penetrating injury limited to the chest/abdomen that were admitted to and discharged from 7 study facilities in Baltimore city in 1979-1980 (n = 581) were studied: 4 general hospitals (n = 241), 2 area-wide trauma centers (n = 298), and a shock trauma unit (n = 42). Emergency center and transferred cases were not studied. Anatomical injury severity, measured by the modified Injury Severity Score (mISS), was a significant prognostic factor for death, as were cardiovascular shock (SBP ≤ 70), injury type (gunshot vs. stab), and ambulance/helicopter (vs. other) transport. All deaths occurred in cases with two or more prognostic factors. Unadjusted relative risks of death compared with general hospitals were 4.3 (95% confidence interval = 2.2, 8.4) for shock trauma and 0.8 (0.4, 1.7) for trauma centers. Controlling for prognostic factors by logistic regression resulted in these relative risks: shock trauma 4.0 (0.7, 22.2), and trauma centers 0.8 (0.2, 3.2). Factors significantly associated with increased risk had the following relative risks by multiple logistic regression: SBP ≤ 70, RR = 40.7 (11.0, 148.7); highest mISS, 42 (7.7, 227); gunshot, 8.4 (2.1, 32.6); and ambulance/helicopter transport, 17.2 (1.3, 228.1). Controlling for age, race, and gender did not alter the results significantly. Actual deaths compared with deaths predicted from a multivariable model of general-hospital cases showed 3.7 more deaths than predicted in shock trauma (SMR = 1.6 (0.8, 2.9)) and 0.7 more deaths than predicted in area-wide trauma centers (SMR = 1.05 (0.6, 1.7)). Selection bias due to the exclusion of transfers and emergency center cases, and residual confounding due to insufficient injury information, may account for the persistence of the adjusted high case fatality in shock trauma. Studying all cases prospectively, including emergency center and transferred cases, is needed.
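The observed-versus-predicted comparison is an indirect standardization: fit a fatality model on general-hospital cases, sum predicted probabilities over shock-trauma cases to get expected deaths, and take observed/expected as the SMR. A minimal sketch with invented data and variable names:

```python
# Expected deaths from a logistic model fit on general-hospital cases, then SMR.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)

def simulate(n, facility):
    return pd.DataFrame({
        "died": rng.integers(0, 2, n),
        "shock": rng.integers(0, 2, n),        # SBP <= 70
        "gunshot": rng.integers(0, 2, n),      # vs. stab
        "miss_high": rng.integers(0, 2, n),    # highest modified ISS category
        "facility": facility,
    })

general = simulate(241, "general")
shock_trauma = simulate(42, "shock_trauma")

# Fit the fatality model on general-hospital cases only
fit = smf.logit("died ~ shock + gunshot + miss_high", data=general).fit(disp=False)

# Expected deaths in the shock-trauma unit under general-hospital case fatality
expected = fit.predict(shock_trauma).sum()
observed = shock_trauma["died"].sum()
print(f"observed = {observed}, expected = {expected:.1f}, SMR = {observed / expected:.2f}")
```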