Abstract:
Epidemiological studies have led to the hypothesis that major risk factors for developing diseases such as hypertension, cardiovascular disease and adult-onset diabetes are established during development. This developmental programming hypothesis proposes that exposure to an adverse stimulus or insult at critical, sensitive periods of development can induce permanent alterations in normal physiological processes that increase disease risk later in life. For cancer, inheritance of a tumor suppressor gene defect confers a high relative risk of disease development; however, these defects are rarely 100% penetrant. Traditionally, gene-environment interactions are thought to contribute to the penetrance of tumor suppressor gene defects by facilitating or inhibiting the acquisition of the additional somatic mutations required for tumorigenesis. The studies presented herein identify developmental programming as a distinctive type of gene-environment interaction that can enhance the penetrance of a tumor suppressor gene defect in adult life. Using rats predisposed to uterine leiomyoma by a germ-line defect in one allele of the tuberous sclerosis complex 2 (Tsc-2) tumor suppressor gene, these studies show that early-life exposure to the xenoestrogen diethylstilbestrol (DES) during development of the uterus increased tumor incidence, multiplicity and size in genetically predisposed animals but failed to induce tumors in wild-type rats. Uterine leiomyomas are ovarian hormone-dependent tumors that develop from the uterine myometrium. DES exposure was shown to developmentally program the myometrium, causing increased expression of estrogen-responsive genes prior to the onset of tumors. Loss of function of the normal Tsc-2 allele remained the rate-limiting event for tumorigenesis; however, tumors that developed in exposed animals displayed an enhanced proliferative response to ovarian steroid hormones relative to tumors that developed in unexposed animals. Furthermore, these studies identify the developmental periods during which target tissues are maximally susceptible to developmental programming. These data suggest that exposure to environmental factors during critical periods of development can permanently alter normal physiological tissue responses and thus increase disease risk in genetically susceptible individuals.
Abstract:
Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in several well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser (DL) urn. Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log likelihood ratio test and three Wald-type tests (simple difference, log relative risk, log odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics under SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable to that of equal randomization. RSIHR usually has the highest power among the three optimal allocation ratios; however, the ORR allocation has better power and a lower type I error rate when the log relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among the four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log likelihood ratio test statistic is robust to changes in the adaptive randomization procedure.
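As a concrete illustration of the first of these urn schemes (a standard formulation of the rule, not code from the dissertation; the success probabilities and trial size below are made up), a Randomized Play-the-Winner trial can be simulated in a few lines of Python:

```python
import random

def rpw_trial(p_success, n_patients, alpha=1, beta=1, seed=0):
    """Simulate one trial under the Randomized Play-the-Winner rule RPW(alpha, beta).

    p_success: true success probabilities, e.g. {"A": 0.7, "B": 0.5}.
    The urn starts with `alpha` balls per arm; each patient is assigned by
    drawing a ball with replacement. A success adds `beta` balls of the drawn
    arm; a failure adds `beta` balls of the other arm.
    """
    rng = random.Random(seed)
    urn = {arm: alpha for arm in p_success}
    assigned = {arm: 0 for arm in p_success}
    failures = 0
    for _ in range(n_patients):
        draw = rng.uniform(0, sum(urn.values()))
        arm = "A" if draw < urn["A"] else "B"
        assigned[arm] += 1
        if rng.random() < p_success[arm]:
            urn[arm] += beta                         # success: reinforce this arm
        else:
            failures += 1
            urn["B" if arm == "A" else "A"] += beta  # failure: reinforce the other
    return assigned, failures

# Illustrative run: 1,000 trials of 100 patients with p_A = 0.7, p_B = 0.5;
# the rule skews allocation toward the better arm A.
runs = [rpw_trial({"A": 0.7, "B": 0.5}, 100, seed=s) for s in range(1000)]
print(sum(a["A"] for a, _ in runs) / len(runs))
```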
Abstract:
Women exposed to diethylstilbestrol (DES) are well known to be at increased risk of gynecologic cancers and infertility. Infertility may result from DES-associated abnormalities in the shape of women's uteri, yet little research has addressed the effect of uterine abnormalities on the risk of infertility and reproductive tract infection. Changes in uterine shape may also influence the risk of autoimmune disease and women's subsequent mental health. A sample of consenting women exposed in utero to the hormone and recruited into the DESAD project underwent hysterosalpingography (HSG) from 1978 to 1984. These women also completed a comprehensive health questionnaire in 1994 that included self-reports of chronic conditions. HSG data were used to categorize uterine shape abnormalities as arcuate, hypoplastic, wide lower segment, or constricted. Women were recruited from two of the four DESAD study sites, in Houston (Baylor) and Minnesota (Mayo). All women were DES-exposed. Adjusted relative risk estimates were calculated comparing women across the range of abnormal uterine shapes to women with normally shaped uteri for each of four outcomes: infertility, reproductive tract infection, autoimmune disease and depressive symptoms. Only the arcuate shape (n=80) was associated with a higher risk of infertility (relative risk [RR] = 1.53, 95% CI = 1.09, 2.15) and of reproductive tract infection (RR = 1.74, 95% CI = 1.11, 2.73). In conclusion, DES-associated arcuate uteri appeared to be associated with a higher risk of reproductive tract infection and infertility, while no other abnormal uterine shape was associated with these two outcomes.
Abstract:
This study retrospectively evaluated the spatial and temporal disease patterns associated with influenza-like illness (ILI), positive rapid influenza antigen detection tests (RIDT), and confirmed H1N1 S-OIV cases reported to the Cameron County Department of Health and Human Services between April 26 and May 13, 2009, using the space-time permutation scan statistic software SaTScan in conjunction with the geographic information system (GIS) software ArcGIS 9.3. The rate and age-adjusted relative risk of each influenza measure were calculated, and a cluster analysis was conducted to determine the geographic regions with statistically higher incidence of disease. A Poisson regression model was developed to identify the effect that socioeconomic status, population density, and certain population attributes of a census block group had on that area's frequency of confirmed S-OIV cases over the entire outbreak. Predominant among the spatiotemporal analyses of ILI, RIDT and S-OIV cases in Cameron County is a consistent concentration of cases along the southern border with Mexico. These findings, together with the slight northward space-time shifts of the ILI and RIDT cluster centers, highlight the southern border as the primary site for public health interventions. Finally, the community-based multiple regression model revealed that three factors—percentage of the population under age 15, average household size, and the number of high school graduates over age 25—were significantly associated with laboratory-confirmed S-OIV in the Lower Rio Grande Valley. Together, these findings underscore the need for community-based surveillance, improve our understanding of the distribution of the burden of influenza within the community, and have implications for vaccination and community outreach initiatives.
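For readers wanting to see the shape of such a model, a census block-group Poisson regression can be specified as below (a minimal sketch in Python with statsmodels; the file and column names are hypothetical, not the study's actual variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per census block group.
df = pd.read_csv("block_groups.csv")

# Poisson regression of confirmed case counts on block-group covariates,
# with log population as an offset so coefficients act on the case rate.
fit = smf.glm(
    "cases ~ median_income + pop_density + pct_under_15 + avg_household_size",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()
print(fit.summary())
```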
Abstract:
This dissertation develops and tests a comparative effectiveness methodology utilizing a novel application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology for a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis discriminates the individual data observations into a relative risk classification by the DEA-PerT methodology. The performance of two distance-based measures, k-nearest neighbor (kNN) and Mahalanobis distance, was subsequently tested for classifying new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance for assessing and implementing treatments and strategies to improve the delivery and performance of health systems.
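A minimal sketch of the Mahalanobis variant of this classification step is given below (Python with SciPy, assuming each tier is summarized by its members' mean vector and covariance matrix; the data layout and tier labels are hypothetical):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

def assign_tier(x, tiers):
    """Assign observation x to the tier whose centroid is nearest in
    Mahalanobis distance. `tiers` maps a tier label to an (n, p) array
    of that tier's member observations."""
    best, best_d = None, np.inf
    for label, members in tiers.items():
        mu = members.mean(axis=0)
        vi = np.linalg.inv(np.cov(members, rowvar=False))  # inverse covariance
        d = mahalanobis(x, mu, vi)
        if d < best_d:
            best, best_d = label, d
    return best, best_d

# Toy usage with two made-up tiers of three risk indicators:
rng = np.random.default_rng(0)
tiers = {"low": rng.normal(0, 1, (50, 3)), "high": rng.normal(2, 1, (50, 3))}
print(assign_tier(np.array([1.8, 2.1, 2.0]), tiers))
```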
Abstract:
Objective. The goal of this study is to characterize the current workforce of CIHs and the lengths of the professional practice careers of past and current CIHs. Methods. This is a secondary analysis of data compiled from nearly 50 annual roster listings of the American Board of Industrial Hygiene (ABIH) for Certified Industrial Hygienists active in each year since 1960. Survival analysis was used to measure the primary outcome of interest, with the Kaplan-Meier method used to estimate the survival function. Study subjects. The population studied is all Certified Industrial Hygienists (CIHs). A CIH is defined by the ABIH as an individual who has achieved the minimum requirements for education and working experience and, through examination, has demonstrated a minimum level of knowledge and competency in the prevention of occupational illnesses. Results. A Cox proportional hazards model was fit across start-time cohorts of CIHs, with cohort 1 as the reference. The estimated relative risks of the event (defined as retirement, or absence from five consecutive years of listings) for cohorts 2, 3, 4 and 5 relative to cohort 1 were 0.385, 0.214, 0.234 and 0.299, respectively. Cohort 2 (CIHs certified from 1970 to 1980) had the lowest hazard ratio, indicating the lowest retirement rate. Conclusion. The number of CIHs still actively practicing up to the end of 2009 increased tremendously starting in 1980 and plateaued in recent decades, suggesting that the supply of and demand for the profession may have reached equilibrium. More demographic information and variables are needed to actually predict the future number of CIHs needed.
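The survival-analysis pipeline described here has a direct analogue in standard software; a minimal sketch using the Python lifelines package (the roster-derived file and column names are illustrative assumptions, not the study's data):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical data: one row per CIH, with years on the roster ("duration"),
# an event flag (1 = retired / absent five consecutive years), and cohort.
df = pd.read_csv("cih_roster.csv")

kmf = KaplanMeierFitter()
kmf.fit(df["duration"], event_observed=df["event"])
kmf.plot_survival_function()  # Kaplan-Meier career-length curve

# Cox proportional hazards model with cohort indicators (cohort 1 = reference);
# the fitted hazard ratios play the role of the relative risks reported above.
cph = CoxPHFitter()
cph.fit(pd.get_dummies(df, columns=["cohort"], drop_first=True),
        duration_col="duration", event_col="event")
cph.print_summary()
```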
Abstract:
Radiation therapy has been used as an effective treatment for malignancies in pediatric patients. However, in many cases the side effects of radiation diminish these patients' quality of life. In order to develop strategies to minimize radiogenic complications, one must first quantitatively estimate pediatric patients' relative risk of radiogenic late effects, which did not become feasible until recently because of the computational complexity involved. The goals of this work were to calculate the dose delivered to tissues and organs in pediatric patients during contemporary photon and proton radiotherapies; to estimate the corresponding risks of radiogenic second cancer and cardiac toxicity based on the calculated doses and on dose-risk models from the literature; to test the statistical significance of the difference between predicted risks after photon versus proton radiotherapy; and to provide a prototype of an evidence-based approach to selecting treatment modalities for pediatric patients that takes second cancer and cardiac toxicity into account. The results showed that proton therapy confers a lower predicted risk of radiogenic second cancer and lower risks of radiogenic cardiac toxicity than photon therapy. An uncertainty analysis revealed that the qualitative findings of this study are insensitive to changes in a wide variety of host- and treatment-related factors.
Abstract:
The development of nosocomial pneumonia was monitored in 96 head-trauma patients requiring mechanical ventilation for up to 10 days. Pneumonia occurred in 28 patients (29.2%), or 53.9 cases per 1,000 admission days. The incidence of nosocomial pneumonia was negatively correlated with the cerebral oxygen metabolic rate (CMRO2) measured during the first five days. The relative risk of nosocomial pneumonia for patients with CMRO2 less than 0.6 µmol/g/min was 2.08 (1.09-3.98) times that of patients with CMRO2 greater than 0.6 µmol/g/min. The association between cerebral oxygen metabolic rate and nosocomial pneumonia was not affected by adjustment for potential confounding factors including age, cimetidine and other infections. These findings provide evidence of CNS-immune system interaction.
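Estimates of this form are conventionally computed from a 2x2 exposure-outcome table; a minimal Python sketch (the counts below are made up, since the abstract does not report the underlying table):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk and approximate 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, (lo, hi)

# Illustrative counts only:
print(relative_risk(a=18, b=22, c=10, d=46))
```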
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems for estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index, similar to the SMR, for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both the direct and the indirect standardization methods.
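For reference, the two standardized indices the model addresses are conventionally defined as follows (standard epidemiologic definitions, not notation taken from the dissertation):

```latex
\[
\mathrm{SMR} = \frac{\sum_i d_i}{\sum_i n_i\,\lambda_i^{\mathrm{std}}},
\qquad
\mathrm{CMF} = \frac{\sum_i N_i^{\mathrm{std}}\,\lambda_i}{\sum_i N_i^{\mathrm{std}}\,\lambda_i^{\mathrm{std}}},
\]
```

where $d_i$ and $n_i$ are the observed deaths and person-years in age stratum $i$ of the study population, $\lambda_i$ and $\lambda_i^{\mathrm{std}}$ are the stratum-specific rates in the study and standard populations, and $N_i^{\mathrm{std}}$ is the standard population's person-years. The SMR corresponds to indirect standardization and the CMF to direct standardization, which is why a model intended to replace both must accommodate either choice of weights.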
Abstract:
Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes, and all sites and models. The total bias, including diagnostic misclassification, of the excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
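For orientation, additive and multiplicative dose-response models of the kind compared here are conventionally written as follows (generic textbook forms; the dissertation's exact parameterization in age at time of bombing, age at time of death and gender is not given in the abstract):

```latex
\[
\text{multiplicative (relative risk):}\quad
\lambda(a, s, d) = \lambda_0(a, s)\,\bigl[1 + \beta d\bigr],
\qquad
\text{additive (absolute risk):}\quad
\lambda(a, s, d) = \lambda_0(a, s) + \beta d,
\]
```

where $\lambda_0$ is the baseline mortality rate at attained age $a$ and sex $s$, $d$ is the (error-corrected) dose, and $\beta$ scales the excess risk; both forms are fit by Poisson regression as described above.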
Abstract:
A case-control study was conducted examining the relationship between preterm birth and occupational physical activity among U.S. Army enlisted gravidas from 1981 to 1984. The study includes 604 cases (37 weeks' gestation or less) and 6,070 controls (greater than 37 weeks' gestation) treated at U.S. Army medical treatment facilities worldwide. Occupational physical activity was measured using existing physical demand ratings of military occupational specialties. A statistically significant trend of preterm birth with increasing physical demand level was found (p = 0.0056). The relative risk point estimates for the two highest physical demand categories were statistically significant: RR = 1.69 (p = 0.02) and 1.75 (p = 0.01), respectively. Six of eleven additional variables were also statistically significant predictors of preterm birth: age (less than 20), race (non-white), marital status (single, never married), paygrade (E1-E3), length of military service (less than 2 years), and aptitude score (less than 100). Multivariate analyses using the logistic model identified three statistically significant risk factors for preterm birth: occupational physical demand, lower paygrade, and non-white race. Controlling for race and paygrade, the two highest physical demand categories were again statistically significant, with relative risk point estimates of 1.56 and 1.70, respectively. The population attributable risk for military occupational physical demand was 26%, adjusted for paygrade and race; 17.5% of the preterm births were attributable to the two highest physical demand categories.
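The population attributable risk quoted above is conventionally related to exposure prevalence and relative risk through Levin's formula (a standard definition given for context; the abstract does not state the exact adjusted estimator used):

```latex
\[
\mathrm{PAR} = \frac{p_e\,(\mathrm{RR} - 1)}{1 + p_e\,(\mathrm{RR} - 1)},
\]
```

where $p_e$ is the prevalence of the exposure in the population and $\mathrm{RR}$ is the relative risk associated with it.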
Abstract:
Traditional comparison of standardized mortality ratios (SMRs) can be misleading if the age-specific mortality ratios are not homogeneous. For this reason, a regression model has been developed that expresses the mortality ratio as a function of age. This model is then applied to mortality data from an occupational cohort study. The nature of the occupational data necessitates the investigation of mortality ratios that increase with age; these occupational data are used primarily to illustrate and develop the statistical methodology. The age-specific mortality ratio (MR) for the covariates of interest can be written as $\mathrm{MR}_{ij\ldots m} = \mu_{ij\ldots m}/\theta_{ij\ldots m} = r\,\exp(Z'_{ij\ldots m}\beta)$, where $\mu_{ij\ldots m}$ and $\theta_{ij\ldots m}$ denote the force of mortality in the study and chosen standard populations in the $ij\ldots m$-th stratum, respectively, $r$ is the intercept, $Z_{ij\ldots m}$ is the vector of covariables associated with the $i$-th age interval, and $\beta$ is a vector of regression coefficients associated with these covariables. A Newton-Raphson iterative procedure is used to determine the maximum likelihood estimates of the regression coefficients. The model provides a statistical method for a logical and easily interpretable description of an occupational cohort's mortality experience, and since it gives a reasonable fit to the mortality data, the model appears fairly realistic. The traditional statistical method for analyzing occupational cohort mortality data is to present a summary index such as the SMR under the assumption of constant (homogeneous) age-specific mortality ratios. Since the mortality ratios for occupational groups usually increase with age, the homogeneity assumption is often untenable. The traditional method of comparing SMRs under the homogeneity assumption is a special case of this model without age as a covariate. The model also provides a statistical technique to evaluate the relative risk between two SMRs or a dose-response relationship among several SMRs. The model has applications in the medical, demographic and epidemiologic areas. The methods developed in this thesis are suitable for future analyses of mortality or morbidity data when the age-specific mortality or morbidity experience is a function of age or when an interaction effect between confounding variables needs to be evaluated.
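Since the observed deaths in each stratum can be treated as Poisson with mean equal to the expected deaths times the modeled mortality ratio, the Newton-Raphson fit has a compact form; a minimal Python sketch (toy data with a single age covariate standing in for the occupational cohort, not the dissertation's code):

```python
import numpy as np

def fit_mr_model(obs, expected, Z, n_iter=25, tol=1e-10):
    """Newton-Raphson MLE for the multiplicative model
    MR = r * exp(Z'beta), assuming obs ~ Poisson(expected * MR).
    The design matrix X = [1, Z] estimates gamma = (log r, beta)."""
    X = np.column_stack([np.ones(len(obs)), np.atleast_2d(Z).T])
    gamma = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = expected * np.exp(X @ gamma)   # fitted deaths per stratum
        score = X.T @ (obs - mu)            # gradient of the Poisson log-likelihood
        info = X.T @ (mu[:, None] * X)      # Fisher information
        step = np.linalg.solve(info, score)
        gamma += step
        if np.max(np.abs(step)) < tol:
            break
    return gamma  # exp(gamma[0]) estimates r; gamma[1:] estimate beta

# Toy strata: observed deaths, expected deaths from standard rates, mid-age.
obs = np.array([12.0, 18.0, 25.0, 30.0])
expected = np.array([10.0, 12.0, 14.0, 15.0])
age_mid = np.array([25.0, 35.0, 45.0, 55.0])
print(fit_mr_model(obs, expected, age_mid))
```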
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids and a threefold higher incidence rate and relative risk of clinical uterine fibroid development compared to women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate uterine fibroid disease burden. With increased interest in understanding uterine fibroid development, and with race a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants be defined and that the operational definition of race as a variable be reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review include OVID Medline, NLM PubMed, EBSCOhost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from the data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for review screening. The initial database yield included 947 articles; after duplicate extraction, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine citations for exclusion and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether they met the full inclusion/exclusion criteria. Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting data on prevalence reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups: of these 3 studies, 2 reported a lower prevalence of risk factors for black women than for other races, contrary to hypothesis, and none reported a conceptual framework for the use of race.
Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but a small majority (51%) failed to report a method of determining the actual race of the participants, and far fewer (4%; only four South American studies) reported a conceptual framework and/or operational definition of race as a variable. Nevertheless, nearly all of the studies (95%) reported race-based health outcomes. The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities but fails to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would appear helpful in elucidating what race is actually measuring, and the risk of exposures associated with that measurement.
Abstract:
Results from epidemiologic studies suggest that persons working in occupations with presumed electric and magnetic field (EMF) exposures are at increased risk of brain cancer. This study utilized data from a completed, population-based, interview case-control study of central nervous system (CNS) tumors and employment in the petrochemical industry to test the hypothesis that employment in EMF-related occupations increases CNS tumor risk. A total of 375 male residents of the Texas-Louisiana Gulf Coast area, ages 20 to 79, with primary neuroglial CNS tumors diagnosed during the period 1980-84 were identified. A population-based comparison group of 450 age-, race- and geographically matched males was selected. Occupational histories and potential risk factor data were collected via personal interviews with study subjects or their next of kin. Adjusted odds ratios were less than 1.0 for persons ever employed in an electrical occupation (OR = 0.65; 95% CI = 0.40-1.09) or whose usual occupation was electrical (OR = 0.76; 95% CI = 0.33-1.73). Relative risk estimates did not increase significantly as time since first employment or duration of employment increased. Examination of CNS tumor risk by high (OR = 0.80), medium (OR = 0.88) and low (OR = 0.45) exposure categories for persons whose usual occupation was electrical did not indicate a dose-response pattern. In addition, the mean age of exposed cases was not significantly younger than that of unexposed cases. Analysis of risk by probability of exposure to EMFs showed non-significant elevations in the adjusted odds ratio for definitely exposed workers, defined by their usual occupation (OR = 1.78; 95% CI = 0.70-4.51) and by ever/never employed status (OR = 1.54; 95% CI = 0.17-4.91). These findings suggest that employment in occupations with presumed EMF exposures does not increase CNS tumor risk as suggested by previous investigations. The results of this study also do not support the EMF tumor-promotion hypothesis.
Abstract:
It has been well documented that inmates incarcerated in prisons and correctional facilities exhibit a higher incidence and prevalence of Mycobacterium tuberculosis (TB) disease than the general population. This has public health implications because correctional systems may serve as reservoirs of TB disease that can lead to TB outbreaks in the facilities or spread to the general public once inmates are released. Although Texas has one of the largest correctional systems in both the US and the world, little is known about TB prevalence and incidence among Texas inmates. The purpose of this study was to elucidate the relationship between TB incidence and incarceration in Texas correctional facilities and to investigate differences across various demographic factors. The study used the national TB database from the US Centers for Disease Control and Prevention (CDC) to calculate and compare the overall incidence of TB disease among correctional facility inmates and similar non-inmates in Texas during 2005-2009. Data were also stratified by age, gender, race/ethnicity, birth status, and HIV status and compared between inmates and non-inmates using chi-squared analysis and relative risks with 95% confidence intervals to assess any significant differences. Results suggest that the overall annual TB incidence among Texas correctional facility inmates (88.6 per 100,000) was significantly higher than that of Texas non-inmates (6.3 per 100,000), a 14-fold difference. Relative risk analyses by gender, race/ethnicity, and HIV infection found that TB incidences for all these demographics were significantly and consistently higher in inmates than in non-inmates. In particular, Hispanic inmates were more likely to develop TB than their non-inmate counterparts, with a relative risk of 23.9 (95% CI 19.4-29.4). Likewise, both male and female inmates were more likely to develop TB than non-inmates (RR = 10.2, 95% CI 8.5-12.2; RR = 20.8, 95% CI 12.2-25.3, respectively), although, unusually, female inmates exhibited a higher TB incidence and relative risk than male inmates, a pattern not previously reported. Among those with HIV infection, correctional facility inmates were 2.6 times as likely to develop TB disease as non-inmates (95% CI 1.5-4.4). Inmates in Texas correctional facilities thus have a higher incidence of TB than non-inmates. Part of this higher risk may be because a large proportion of inmates come from populations already at high risk for TB, such as foreign-born immigrants, those infected with HIV, and low-SES groups including many racial/ethnic minorities. These results may be used as a basis for more controlled and detailed research in the area and to further characterize incarceration as a risk factor for TB. They may also bring much-needed attention to this health disparity among public health officials, legislators, and health administrators, to expand and improve TB control in Texas correctional facilities, particularly among inmates released to the community, and to reduce the risk of TB transmission to the general population.