64 results for multivariable regression
in DigitalCommons@The Texas Medical Center
Abstract:
Glutathione S-transferase (GST) genes detoxify and metabolize carcinogens, including oxygen free radicals, which may contribute to salivary gland carcinogenesis. This cancer center-based case-control association study included 166 patients with incident salivary gland carcinoma (SGC) and 511 cancer-free controls. We performed multiplex polymerase chain reaction-based polymorphism genotyping assays for GSTM1 and GSTT1 null genotypes. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated with multivariable logistic regression analyses adjusted for age, sex, ethnicity, tobacco use, family history of cancer, alcohol use, and radiation exposure. In our results, 27.7% of the SGC cases and 20.6% of the controls were null for GSTT1 (P = 0.054), and 53.0% of the SGC cases and 50.9% of the controls were null for GSTM1 (P = 0.633). The adjusted multivariable regression analysis suggested that the GSTT1 null genotype was associated with a significantly increased risk for SGC (odds ratio 1.5, 95% confidence interval 1.0-2.3). Additionally, 13.9% of the SGC cases but only 8.4% of the controls were null for both genes, and the adjusted multivariable regression analysis suggested that carrying both null genotypes was significantly associated with an approximately 2-fold increased risk for SGC (odds ratio 1.9, 95% confidence interval 1.0-3.5). The presence of the GSTT1 null genotype, and the simultaneous presence of the GSTM1 and GSTT1 null genotypes, appear to be associated with significantly increased SGC risk. These findings warrant further study with larger sample sizes.
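The crude (unadjusted) odds ratio and its Wald confidence interval can be recovered from the reported proportions alone. A minimal Python sketch, assuming case/control counts back-calculated from the stated percentages (46 of 166 and 105 of 511 are reconstructions, not figures taken from the paper):

```python
import math

# Counts reconstructed from the reported percentages (an assumption:
# 27.7% of 166 cases and 20.6% of 511 controls were GSTT1-null).
cases_null, cases_total = 46, 166
controls_null, controls_total = 105, 511

a = cases_null                      # exposed (null) cases
b = cases_total - cases_null        # unexposed cases
c = controls_null                   # exposed (null) controls
d = controls_total - controls_null  # unexposed controls

# Crude odds ratio from the 2x2 table.
or_ = (a / b) / (c / d)

# Wald 95% CI, computed on the log-odds scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
```

This crude estimate lands close to the adjusted OR of 1.5 (1.0-2.3) reported above, but the paper's figures come from a multivariable logistic model with seven covariates, which this sketch does not reproduce.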
Abstract:
Gender and racial/ethnic disparities in colorectal cancer (CRC) screening have been observed and associated with income status, education level, treatment, and late diagnosis. According to the American Cancer Society, among both males and females, CRC is the third most frequently diagnosed type of cancer and accounts for 10% of cancer deaths in the United States. Differences in CRC test use have been documented and attributed to access to health care, demographics, and health behaviors, but few studies have examined the correlates of CRC screening test use by gender. The present study examined the prevalence of CRC screening test use and assessed whether disparities are explained by gender and racial/ethnic differences. To assess these associations, the study utilized a cross-sectional design and examined the distribution of the covariates for gender and racial/ethnic group differences using the chi-square statistic. Logistic regression was used to estimate the prevalence odds ratio and to adjust for the confounding effects of the covariates. Results indicated that there are disparities in CRC screening test use, and there was a statistically significant difference in the prevalence of both FOBT and endoscopy screening between genders (χ², p ≤ 0.003). Females had a lower prevalence of endoscopy colorectal cancer screening than males when adjusting for age and education (OR 0.88, 95% CI 0.82–0.95). However, no statistically significant difference was found between racial/ethnic groups (χ², p ≤ 0.179) after adjusting for age, education, and gender. For both FOBT and endoscopy screening, Non-Hispanic Blacks and Hispanics had a lower prevalence of screening compared with Non-Hispanic Whites. In the multivariable regression model, the gender disparities could largely be explained by age, income status, education level, and marital status.
Overall, individuals aged 70–79 years who were married, had some college education, and had an income greater than $20,000 had a higher prevalence of colorectal cancer screening test use within gender and racial/ethnic groups.
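The chi-square statistic this study used to compare screening prevalence between groups comes from a contingency table of observed versus expected counts. A self-contained sketch for a 2x2 table, using purely hypothetical counts (not the study's data):

```python
# Hypothetical 2x2 table: screening status (yes/no) by gender.
# These counts are illustrative only, not from the study.
table = [[420, 580],   # male:   screened, not screened
         [360, 640]]   # female: screened, not screened

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - exp) ** 2 / exp
```

With 1 degree of freedom, a statistic above the critical value 3.84 corresponds to p < 0.05, the kind of threshold behind the study's reported p ≤ 0.003.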
Abstract:
The purpose of this study was to analyze the implementation of national family planning policy in the United States, which was embedded in four separate statutes during the period of study, Fiscal Years 1976-81. The design of the study utilized a modification of the Sabatier and Mazmanian framework for policy analysis, which defined implementation as the carrying out of statutory policy. The study was divided into two phases. The first part of the study compared the implementation of family planning policy under each of the pertinent statutes. The second part of the study identified factors that were associated with implementation of federal family planning policy within the context of block grants. Implementation was measured here by federal dollars spent for family planning, adjusted for the size of the respective state target populations. Expenditure data were collected from the Alan Guttmacher Institute and from each of the federal agencies having administrative authority for the four pertinent statutes, respectively. Data from the former were used for most of the analysis because they were more complete and more reliable. The first phase of the study tested the hypothesis that the coherence of a statute is directly related to effective implementation. Equity in the distribution of funds to the states was used to operationalize effective implementation. To a large extent, the results of the analysis supported the hypothesis. In addition to their theoretical significance, these findings were also significant for policymakers insofar as they demonstrated the effectiveness of categorical legislation in implementing desired health policy. Given the current and historically intermittent emphasis on more state and less federal decision-making in health and human services, the second phase of the study focused on state-level factors that were associated with expenditures of social service block grant funds for family planning.
Using the Sabatier-Mazmanian implementation model as a framework, many factors were tested. Those factors showing the strongest conceptual and statistical relationship to the dependent variable were used to construct a statistical model. Using multivariable regression analysis, this model was applied cross-sectionally to each of the years of the study. The most striking finding was that the dominant determinants of state spending varied for each year of the study (Fiscal Years 1976-1981). The significance of these results was that they provided empirical support for current implementation theory, showing that the dominant determinants of implementation vary greatly over time.
Abstract:
Numerous harmful occupational exposures affect working teens in the United States. Teens working in agriculture and other heavy-labor industries may be at risk for occupational exposures to pesticides and solvents. The neurotoxicity of pesticides and solvents at high doses is well known; however, the long-term effects of these substances at low doses on occupationally exposed adolescents have not been well studied. To address this research gap, a secondary analysis of cross-sectional data was completed to estimate the prevalence of self-reported symptoms of neurotoxicity among a cohort of high school students from Starr County, Texas, a rural area along the Texas-Mexico border. Multivariable linear regression was used to estimate the association between work status (i.e., no work, farm work, and non-farm work) and symptoms of neurotoxicity, while controlling for age, gender, Spanish-speaking preference, inhalant use, tobacco use, and alcohol use. The sample included 1,208 students. Of these, the majority (85.84%) did not report having worked during the prior nine months, compared to 4.80% who did only farm work, 6.21% who did only non-farm work, and 3.15% who did both types of work. On average, students reported 3.26 symptoms, with a range of 0-16. The most commonly endorsed items across work statuses were those related to memory impairment. Adolescents employed in non-farm jobs reported more neurotoxicity symptoms than those who reported that they did not work (mean 4.31; SD 3.97). In the adjusted multivariable regression model, adolescents reporting non-farm work status reported an average of 0.77 more neurotoxicity symptoms on the Q16 than those who did not work (P = 0.031). The confounding variables included in the final model were all significantly associated with report of neurotoxicity symptoms. Future research should examine the relationship between these variables and self-report of symptoms of neurotoxicity.
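The reported coefficient (0.77 more symptoms for non-farm workers) is the kind of quantity a linear regression slope estimates. A one-predictor least-squares sketch on hypothetical data, showing that with a binary work-status indicator the slope equals the difference in group mean symptom counts:

```python
# Minimal one-predictor sketch of the least-squares fit behind a linear
# regression model; the data points are hypothetical, not from the study.
x = [0, 0, 1, 1, 1, 0, 1, 0]   # work status (1 = non-farm work)
y = [3, 2, 5, 4, 4, 3, 5, 2]   # illustrative symptom counts

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# slope = cov(x, y) / var(x); the intercept puts the line through the means.
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
```

Here the intercept is the mean count in the non-working group and the slope is the working-group mean minus that. The study's multivariable model adds the listed covariates (age, gender, substance use, etc.), which this single-predictor sketch omits.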
Abstract:
Injection drug use is the third most frequent risk factor for new HIV infections in the United States. A dual mode of exposure, unsafe drug-using practices and risky sexual behaviors, underlies injection drug users' (IDUs) risk for HIV infection. This study aims to characterize patterns of drug use and sexual behaviors and to examine the social contexts associated with risk behaviors among a sample of injection drug users. This cross-sectional study includes 523 eligible injection drug users from Houston, Texas, recruited into the 2009 National HIV Behavioral Surveillance project. Three separate sets of analyses were carried out. First, using latent class analysis (LCA) and maximum likelihood, we identified classes of behavior describing levels of HIV risk from nine drug and sexual behaviors. Second, eight separate multivariable regression models were built to examine the odds of reporting a given risk behavior. We constructed the most parsimonious multivariable model using a manual backward stepwise process. Third, we examined whether HIV serostatus knowledge (self-reported positive, negative, or unknown serostatus) is associated with drug use and sexual HIV risk behaviors. Participants were mostly male, older, and non-Hispanic Black. Forty-two percent of our sample had behaviors putting them at high risk, 25% at moderate risk, and 33% at low risk for HIV infection. Individuals in the high-risk group had the highest probability of risky behaviors, characterized as almost always sharing needles (0.93), seldom using condoms (0.10), reporting recent exchange sex partners (0.90), and practicing anal sex (0.34). We observed that unsafe injecting practices were associated with high-risk sexual behaviors. IDUs who shared needles had higher odds of having anal sex (OR=2.89, 95% CI: 1.69-4.92) and unprotected sex (OR=2.66, 95% CI: 1.38-5.10) at last sex.
Additionally, homelessness was associated with needle sharing (OR=2.24, 95% CI: 1.34-3.76), and cocaine use was associated with multiple sex partners (OR=1.82, 95% CI: 1.07-3.11). Furthermore, twenty-one percent of the sample was unaware of their HIV serostatus. The three groups did not differ from each other in terms of drug-use behaviors: always using a new sterile needle, or sharing needles or drug preparation equipment. However, IDUs unaware of their HIV serostatus were 33% more likely to report having more than three sexual partners in the past 12 months, 45% more likely to report unprotected sex, and 85% more likely to have used drugs and/or alcohol before or during last sex, compared to HIV-positive IDUs. This analysis underscores the merit of the LCA approach for empirically categorizing injection drug users into distinct classes and identifying their risk patterns using multiple indicators, and our results show considerable overlap of high-risk sexual and drug-use behaviors among the high-risk class members. The observed clustering of drug and sexual risk behaviors in this population confirms that injection drug users do not represent a homogeneous population in terms of HIV risk. These findings will help develop tailored prevention programs.
Abstract:
This study investigates the degree to which gender, ethnicity, relationship to perpetrator, and geomapped socio-economic factors significantly predict the incidence of childhood sexual abuse, physical abuse, and non-abuse. These variables are then linked to geographic identifiers using geographic information system (GIS) technology to develop a geo-mapping framework for child sexual and physical abuse prevention.
Abstract:
BACKGROUND: Renal involvement is a serious manifestation of systemic lupus erythematosus (SLE); it may portend a poor prognosis, as it may lead to end-stage renal disease (ESRD). The purpose of this study was to determine the factors predicting the development of renal involvement and its progression to ESRD in a multi-ethnic SLE cohort (PROFILE). METHODS AND FINDINGS: PROFILE includes SLE patients from five different United States institutions. We examined at baseline the socioeconomic-demographic, clinical, and genetic variables associated with the development of renal involvement and its progression to ESRD by univariable and multivariable Cox proportional hazards regression analyses. Analyses of onset of renal involvement included only patients with renal involvement after SLE diagnosis (n = 229). Analyses of ESRD included all patients, regardless of whether renal involvement occurred before, at, or after SLE diagnosis (34 of 438 patients). In addition, we performed a multivariable logistic regression analysis of the variables associated with the development of renal involvement at any time during the course of SLE. In the time-dependent multivariable analysis, patients developing renal involvement were more likely to have more American College of Rheumatology criteria for SLE, and to be younger, hypertensive, and of African-American or Hispanic (from Texas) ethnicity. Alternative regression models were consistent with these results. In addition to greater accrued disease damage (renal damage excluded), younger age, and Hispanic ethnicity (from Texas), homozygosity for the valine allele of FcgammaRIIIa (FCGR3A*GG) was a significant predictor of ESRD. Results from the multivariable logistic regression model that included all cases of renal involvement were consistent with those from the Cox model. CONCLUSIONS: Fcgamma receptor genotype is a risk factor for progression of renal disease to ESRD.
Since the frequency distribution of FCGR3A alleles does not vary significantly among the ethnic groups studied, the additional factors underlying the ethnic disparities in renal disease progression remain to be elucidated.
Abstract:
BACKGROUND: Obesity is a systemic disorder associated with an increase in left ventricular mass and premature death and disability from cardiovascular disease. Although bariatric surgery reverses many of the hormonal and hemodynamic derangements, the long-term collective effects on body composition and left ventricular mass have not been considered before. We hypothesized that the decrease in fat mass and lean mass after weight loss surgery is associated with a decrease in left ventricular mass. METHODS: Fifteen severely obese women (mean body mass index [BMI]: 46.7+/-1.7 kg/m(2)) with medically controlled hypertension underwent bariatric surgery. Left ventricular mass and plasma markers of systemic metabolism, together with BMI, waist and hip circumferences, body composition (fat mass and lean mass), and resting energy expenditure, were measured at 0, 3, 9, 12, and 24 months. RESULTS: Left ventricular mass continued to decrease linearly over the entire period of observation, while rates of weight loss, loss of lean mass, loss of fat mass, and resting energy expenditure all plateaued at 9 months (P <.001 for all). Parameters of systemic metabolism normalized by 9 months and showed no further change at 24 months after surgery. CONCLUSIONS: Even though parameters of obesity, including BMI and body composition, plateau, the benefits of bariatric surgery on systemic metabolism and left ventricular mass are sustained. We propose that the progressive decrease of left ventricular mass after weight loss surgery is regulated by neurohumoral factors, and may contribute to improved long-term survival.
Abstract:
BACKGROUND: Follow-up of abnormal outpatient laboratory test results is a major patient safety concern. Electronic medical records can potentially address this concern through automated notification. We examined whether automated notifications of abnormal laboratory results (alerts) in an integrated electronic medical record resulted in timely follow-up actions. METHODS: We studied 4 alerts: hemoglobin A1c > or =15%, positive hepatitis C antibody, prostate-specific antigen > or =15 ng/mL, and thyroid-stimulating hormone > or =15 mIU/L. An alert tracking system determined whether the alert was acknowledged (ie, provider clicked on and opened the message) within 2 weeks of transmission; acknowledged alerts were considered read. Within 30 days of result transmission, record review and provider contact determined follow-up actions (eg, patient contact, treatment). Multivariable logistic regression models analyzed predictors for lack of timely follow-up. RESULTS: Between May and December 2008, 78,158 tests (hemoglobin A1c, hepatitis C antibody, thyroid-stimulating hormone, and prostate-specific antigen) were performed, of which 1163 (1.48%) were transmitted as alerts; 10.2% of these (119/1163) were unacknowledged. Timely follow-up was lacking in 79 (6.8%), and was not statistically different for acknowledged and unacknowledged alerts (6.4% vs 10.1%; P =.13). Of 1163 alerts, 202 (17.4%) arose from unnecessarily ordered (redundant) tests. Alerts for a new versus known diagnosis were more likely to lack timely follow-up (odds ratio 7.35; 95% confidence interval, 4.16-12.97), whereas alerts related to redundant tests were less likely to lack timely follow-up (odds ratio 0.24; 95% confidence interval, 0.07-0.84). CONCLUSIONS: Safety concerns related to timely patient follow-up remain despite automated notification of non-life-threatening abnormal laboratory results in the outpatient setting.
Abstract:
BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123 638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). 
Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
Abstract:
OBJECTIVE: To explore ethnic differences in do-not-resuscitate orders after intracerebral hemorrhage. DESIGN: Population-based surveillance. SETTING: Corpus Christi, Texas. PATIENTS: All cases of intracerebral hemorrhage in the community of Corpus Christi, TX were ascertained as part of the Brain Attack Surveillance in Corpus Christi (BASIC) project. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Medical records were reviewed for do-not-resuscitate orders. Unadjusted and multivariable logistic regression were used to test for associations between ethnicity and do-not-resuscitate orders, both overall ("any do-not-resuscitate") and within 24 hrs of presentation ("early do-not-resuscitate"), adjusted for age, gender, Glasgow Coma Scale, intracerebral hemorrhage volume, intraventricular hemorrhage, infratentorial hemorrhage, modified Charlson Index, and admission from a nursing home. A total of 270 cases of intracerebral hemorrhage from 2000-2003 were analyzed. Mexican-Americans were younger and had a higher Glasgow Coma Scale than non-Hispanic whites. Mexican-Americans were half as likely as non-Hispanic whites to have early do-not-resuscitate orders in unadjusted analysis (odds ratio 0.45, 95% confidence interval 0.27, 0.75), although this association was not significant when adjusted for age (odds ratio 0.61, 95% confidence interval 0.35, 1.06) and in the fully adjusted model (odds ratio 0.75, 95% confidence interval 0.39, 1.46). Mexican-Americans were less likely than non-Hispanic whites to have do-not-resuscitate orders written at any time point (odds ratio 0.37, 95% confidence interval 0.23, 0.61). Adjustment for age alone attenuated this relationship although it retained significance (odds ratio 0.49, 95% confidence interval 0.29, 0.82). 
In the fully adjusted model, Mexican-Americans were less likely than non-Hispanic whites to use do-not-resuscitate orders at any time point, although the 95% confidence interval included one (odds ratio 0.52, 95% confidence interval 0.27, 1.00). CONCLUSIONS: Mexican-Americans were less likely than non-Hispanic whites to have do-not-resuscitate orders after intracerebral hemorrhage although the association was attenuated after adjustment for age and other confounders. The persistent trend toward less frequent use of do-not-resuscitate orders in Mexican-Americans suggests that further study is warranted.
Abstract:
The adult male golden hamster, when exposed to blinding (BL), short photoperiod (SP), or daily melatonin injections (MEL), demonstrates dramatic reproductive collapse. This collapse can be blocked by removal of the pineal gland prior to treatment. Reproductive collapse is characterized by a dramatic decrease in both testicular weight and serum gonadotropin titers. The present study was designed to examine the interactions of the hypothalamus and pituitary gland during testicular regression, and specifically to compare and contrast changes caused by the three commonly employed methods of inducing testicular regression (BL, SP, MEL). Hypothalamic LHRH content was altered by all three treatments. There was an initial increase in LHRH content that occurred concomitantly with the decreased serum gonadotropin titers, followed by a precipitous decline in LHRH content that reflected the rapid increases in both serum LH and FSH which occur during spontaneous testicular recrudescence. In vitro pituitary responsiveness was altered by all three treatments: there was a decline in basal and maximally stimulatable release of both LH and FSH which paralleled the fall of serum gonadotropins. During recrudescence, both basal and maximal release dramatically increased in a manner comparable to serum hormone levels. While all three treatments were equally effective in their ability to induce changes at all levels of the endocrine system, there were important temporal differences in the effects of the various treatments. Melatonin injections induced the most rapid changes in endocrine parameters, followed by exposure to short photoperiod. Blinding required the most time to induce the same changes. This study has demonstrated that pineal-mediated testicular regression is a process which involves dynamic changes in multiply-dependent endocrine relationships, and proper evaluation of these changes must be performed with specific temporal events in mind.
Abstract:
Back symptoms are a major global public health problem, with a lifetime prevalence ranging between 50-80%. Research suggests that work-related factors contribute to the occurrence of back pain in various industries. Despite the hazardous nature, strenuous tasks, and awkward postures associated with farm work, little is known about back injury and symptoms in farmworker adults and children. Research in the United States is particularly limited. This is a concern given the large proportion of migrant farmworkers in the United States without adequate access to healthcare, as well as the substantial number of youth working in agriculture. The present study describes back symptoms and identifies work-related factors associated with back pain in migrant farmworker families and farmworker high school students from Starr County, TX. Two separate datasets were used from two cohort studies, "Injury and Illness Surveillance in Migrant Farmworkers (MANOS)" (study A: n=267 families) and "South Texas Adolescent Rural Research Study (STARRS)" (study B: n=345). Descriptive and inferential statistics, including multivariable logistic regression, were used to identify work-related factors associated with back pain in each study. In migrant farmworker families, the prevalence of chronic back pain during the last migration season ranged from 9.5% among the youngest children to 33.3% among mothers. Chronic back pain was significantly associated with increasing age; fairly bad/very bad quality of sleep while migrating; fewer than eight hours of sleep at home in Starr County, TX; depressive symptoms while migrating; self-provided water for washing hands/drinking; weeding at work; and exposure to pesticide drift/direct spray. Among farmworker adolescents, the prevalence of severe back symptoms was 15.7%.
Severe back symptoms were significantly associated with being female; history of a prior accident/back injury; feeling tense, stressed, or anxious sometimes/often; lifting/carrying heavy objects outside of work; current tobacco use; increasing lifetime number of migrant farmworker years; working with/around knives; and working on corn crops. Overall, the results support that associations between work-related exposures and chronic back pain and severe back symptoms remain after controlling for the effect of non-work exposures in farmworker populations.
Abstract:
Ordinal logistic regression models are used to analyze dependent variables with multiple outcomes that can be ranked, but they have been underutilized. In this methodological study, we describe four logistic regression models for analyzing an ordinal response variable: the multinomial logistic model, the adjacent-category logit model, the proportional odds model, and the continuation-ratio model. We illustrate and compare the fit of these models using data from the survey designed by The University of Texas School of Public Health research project PCCaSO (Promoting Colon Cancer Screening in people 50 and Over), which studied patients' confidence in completing colorectal cancer screening (CRCS). The purpose of this study is twofold: first, to provide a synthesized review of models for analyzing data with an ordinal response, and second, to evaluate their usefulness in epidemiological research, with particular emphasis on model formulation, interpretation of model coefficients, and their implications. The four ordinal logistic models used in this study are (1) the multinomial logistic model, (2) the adjacent-category logistic model [9], (3) the continuation-ratio logistic model [10], and (4) the proportional odds model [11]. We recommend that the analyst perform (1) goodness-of-fit tests and (2) sensitivity analysis by fitting and comparing different models.
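Of the four models, the proportional odds model is the most widely used. A small sketch of its core mechanics, turning one shared slope and ordered cutpoints into category probabilities via cumulative logits (the cutpoints and coefficient here are illustrative values, not estimates from the PCCaSO data):

```python
import math

def expit(z):
    """Inverse logit: maps a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical proportional odds model with 3 ordered categories
# (e.g., low / medium / high confidence in completing screening).
cutpoints = [-1.0, 1.0]   # alpha_1 < alpha_2, one fewer than the categories
beta = 0.5                # single slope shared across all cumulative logits

def category_probs(x):
    # P(Y <= k | x) = expit(alpha_k - beta * x); successive differences
    # of the cumulative probabilities give P(Y = k).
    cum = [expit(a - beta * x) for a in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

probs = category_probs(x=1.0)
```

The "proportional odds" assumption is visible in the code: a single `beta` shifts every cumulative logit by the same amount, which is what a goodness-of-fit check on this model should test.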
Abstract:
Background. Clostridium difficile is the leading cause of hospital-associated infectious diarrhea and colitis. About 3 million cases of Clostridium difficile diarrhea occur each year, with an annual cost of $1 billion. About 20% of patients acquire C. difficile during hospitalization. Infection with Clostridium difficile can result in serious complications, posing a threat to the patient's life. Purpose. The aim of this research was to demonstrate the uniqueness in the characteristics of C. difficile-positive nosocomial diarrhea cases compared with C. difficile-negative nosocomial diarrhea controls admitted to a local hospital. Methods. One hundred and ninety patients with a positive test and one hundred and ninety with a negative test for Clostridium difficile nosocomial diarrhea, selected from patients tested between January 1, 2002 and December 31, 2003, comprised the study population. Demographic and clinical data were collected from medical records. Logistic regression analyses were conducted to determine the odds of association between selected variables and the outcome of Clostridium difficile nosocomial diarrhea. Results. Among the antibiotic classes, cephalosporins (OR, 1.87; 95% CI, 1.23 to 2.85), penicillins (OR, 1.57; 95% CI, 1.04 to 2.37), fluoroquinolones (OR, 1.65; 95% CI, 1.09 to 2.48), and antifungals (OR, 2.17; 95% CI, 1.20 to 3.94) were significantly associated with Clostridium difficile nosocomial diarrhea. Ceftazidime (OR, 1.95; 95% CI, 1.25 to 3.03, p=0.003), gatifloxacin (OR, 1.97; 95% CI, 1.31 to 2.97, p=0.001), clindamycin (OR, 3.13; 95% CI, 1.99 to 4.93, p<0.001), and vancomycin (OR, 1.77; 95% CI, 1.18 to 2.66, p=0.006) were also significantly associated with the disease. Vancomycin was not statistically significant when analyzed in a multivariable model. Other significantly associated drugs were antacids, laxatives, narcotics, and ranitidine. Prolonged use of antibiotics and an increased number of comorbid conditions were also associated with C. difficile nosocomial diarrhea. Conclusion. The etiology of C. difficile diarrhea is multifactorial. Exposure to antibiotics and other drugs, prolonged antibiotic usage, the presence and severity of comorbid conditions, and prolonged hospital stay were shown to contribute to the development of the disease. It is imperative that any attempt to prevent the disease, or contain its spread, be done on several fronts.