9 results for competing risks model

in DigitalCommons@The Texas Medical Center


Relevance:

90.00%

Abstract:

The relationship between serum cholesterol and cancer incidence was investigated in the population of the Hypertension Detection and Follow-up Program (HDFP). The HDFP was a multi-center trial designed to test the effectiveness of a stepped program of medication in reducing mortality associated with hypertension. Over 10,000 participants, ages 30-69, were followed with clinic and home visits for a minimum of five years. Cancer incidence was ascertained from existing study documents, which included hospitalization records, autopsy reports, and death certificates. During the five years of follow-up, 286 new cancer cases were documented. The distribution of sites and total number of cases were similar to those predicted using rates from the Third National Cancer Survey. A non-fasting baseline serum cholesterol level was available for most participants. Age-, sex-, and race-specific five-year cancer incidence rates were computed for each cholesterol quartile. Rates were also computed by smoking status, education status, and percent-ideal-weight quartiles. In addition, these and other factors were investigated with the use of the multiple logistic model. For all cancers combined, a significant inverse relationship existed between baseline serum cholesterol levels and cancer incidence. Previously documented associations between smoking, education, and cancer were also demonstrated but did not account for the relationship between serum cholesterol and cancer. The relationship was more evident in males than in females, but this was felt to reflect the different distribution of specific cancer sites in the two sexes. The inverse relationship existed for all specific sites investigated (except breast), although statistical significance was reached only for prostate carcinoma. Analyses excluding cases diagnosed during the first two years of follow-up still yielded an inverse relationship. Life table analysis indicated that competing risks during the period of follow-up did not account for the inverse relationship. It is concluded that a weak inverse relationship does exist between serum cholesterol and cancer incidence for many but not all cancer sites. This relationship is not due to confounding by other known cancer risk factors, competing risks, or persons entering the study with undiagnosed cancer. Not enough information is available at present to determine whether this relationship is causal, and further research is suggested.
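As a hedged illustration of the kind of multiple logistic model the abstract describes, the sketch below fits cancer incidence against cholesterol quartile with age and smoking as covariates. All data, variable names, and coefficients are invented for illustration; the HDFP data themselves are not reproduced here.

```python
# Hypothetical sketch: multiple logistic model of 5-year cancer incidence
# on cholesterol quartile, adjusting for age and smoking. All values invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(30, 70, n),
    "smoker": rng.integers(0, 2, n),
    "chol_quartile": rng.integers(1, 5, n),  # 1 = lowest quartile
})
# Simulate a weak inverse cholesterol-cancer relationship (assumed, not HDFP data).
logit_p = -4.0 + 0.03 * (df["age"] - 50) + 0.4 * df["smoker"] - 0.15 * df["chol_quartile"]
df["cancer"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("cancer ~ C(chol_quartile) + age + smoker", data=df).fit()
print(model.summary())  # odds ratios per quartile: np.exp(model.params)
```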

Relevance:

80.00%

Abstract:

A life table methodology was developed that estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were derived on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males, a tendency that remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory, digestive, and musculoskeletal systems were the three leading causes of cumulative sick time across years of service.
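A minimal sketch of the core life table computation, assuming invented continuation probabilities and sick-day rates; the study's actual decrement structure, with competing risks of death and administrative and medical separation, is far richer.

```python
# Minimal sketch (invented numbers): given annual continuation probabilities
# by year of service, compute expected remaining service time, and combine
# with expected sick days to form the sick-time / service-time ratio.
import numpy as np

p_continue = np.array([0.80, 0.85, 0.90, 0.92, 0.93])      # P(serve year t+1 | in service at t)
sick_days_per_year = np.array([2.4, 2.6, 2.9, 3.1, 3.5])   # expected sick days while serving

def expected_remaining(p_cont, per_year_amount):
    """Sum per-year amounts weighted by the probability of still being in service."""
    surv = np.concatenate(([1.0], np.cumprod(p_cont[:-1])))  # P(in service at start of year t)
    return float(np.sum(surv * per_year_amount))

service_years = expected_remaining(p_continue, np.ones_like(p_continue))
sick_days = expected_remaining(p_continue, sick_days_per_year)
print(f"expected remaining service: {service_years:.2f} yr; "
      f"illness-impact ratio: {sick_days / (service_years * 365.25):.5f}")
```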

Relevance:

80.00%

Abstract:

Evaluation of the impact of a disease on life expectancy is an important part of public health. Potential gains in life expectancy (PGLE) that properly take into account competing risks are an effective indicator for measuring the impact of multiple causes of death. This study aimed to measure the PGLEs from reducing or eliminating the major causes of death in the USA from 2001 to 2008. To calculate the PGLEs due to the elimination of specific causes of death, the age-specific mortality rates for heart disease, malignant neoplasms, Alzheimer's disease, kidney diseases, and HIV/AIDS, along with life table construction data, were obtained from the National Center for Health Statistics, and multiple decremental life tables were constructed. The PGLEs from eliminating heart disease, malignant neoplasms, or HIV/AIDS decreased steadily from 2001 to 2008, whereas the PGLEs from eliminating Alzheimer's disease or kidney diseases showed increasing trends. The PGLEs (in years) at birth for all races, males, females, whites, white males, white females, blacks, black males, and black females from complete elimination of heart disease over 2001–2008 were 0.336–0.299, 0.327–0.301, 0.344–0.295, 0.360–0.315, 0.349–0.317, 0.371–0.316, 0.278–0.251, 0.272–0.255, and 0.282–0.246, respectively. The corresponding PGLEs at birth from complete elimination of malignant neoplasms, Alzheimer's disease, kidney diseases, or HIV/AIDS over 2001–2008 were computed in the same way. Most of these diseases affect specific populations: HIV/AIDS tends to have a greater impact on people of working age, heart disease and malignant neoplasms have a greater impact on people over 65 years of age, and Alzheimer's disease and kidney diseases have a greater impact on people over 75 years of age. To measure the impact of these diseases on life expectancy during the working years, partial multiple decremental life tables were constructed and the PGLEs were computed by partial or complete elimination of the various causes of death over that age range. The results thus outline how each disease affects life expectancy in age-, race-, or sex-specific populations in the USA. The findings not only help evaluate current public health improvements but also provide useful information for future research and disease control programs.
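The sketch below illustrates the cause-deleted life table arithmetic behind a PGLE, under a toy Gompertz-type mortality curve and an assumed cause-specific share of deaths; real calculations use NCHS age-specific rates and abridged-table conventions.

```python
# Illustrative sketch (made-up rates): a cause-deleted period life table.
# PGLE = life expectancy with the cause's mortality removed minus observed
# life expectancy. Hazards and the 15% cause share are assumptions.
import numpy as np

ages = np.arange(0, 101)                  # single-year ages 0..100
m_all = 0.0005 * np.exp(0.085 * ages)     # all-cause hazard (toy Gompertz)
m_cause = 0.15 * m_all                    # assumed share attributed to the cause

def life_expectancy(m):
    q = 1 - np.exp(-m)                                    # annual death probability
    l = np.concatenate(([1.0], np.cumprod(1 - q)))[:-1]   # survivors at each age
    L = l * (1 - q / 2)                                   # person-years per interval
    return float(L.sum())

e0 = life_expectancy(m_all)
e0_deleted = life_expectancy(m_all - m_cause)
print(f"e0 = {e0:.2f}, cause-deleted e0 = {e0_deleted:.2f}, PGLE = {e0_deleted - e0:.2f} yr")
```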

Relevance:

30.00%

Abstract:

This study examines the relationship among psychological resources (generalized resistance resources), care demands (demands for care, competing demands, perception of burden), and cognitive stress in a selected population of primary family caregivers. The study utilizes Antonovsky's Salutogenic Model of Health, specifically the concept of generalized resistance resources (GRRs), to analyze the relative effect of these resources in mediating cognitive stress, controlling for other care demands. The study is based on a sample of 784 eligible caregivers who (1) were relatives, (2) had the main responsibility for care, defined as a primary caregiver, and (3) provided a scaled stress score for the amount of overall care given to the care recipient (family member). The sample was drawn from the 1982 National Long-Term Care Survey (NLTCS) of individuals who assisted a given NLTCS sample person with ADL limitations. The study tests the following hypotheses: (a) there will be a negative relationship between GRRs and cognitive stress, controlling for care demands (demands for care, competing demands, and perceptions of burden); (b) of the specific GRR domains (material, cognitive, social, cultural-environmental), the social domain will be the most significant factor predicting a decrease in cognitive stress; and (c) the social domain will be more significant for female than for male primary family caregivers in decreasing cognitive stress. The study found that GRRs had a statistically significant mediating effect on cognitive stress, but the GRRs were a less significant predictor of stress than perception of burden and demands for care. Thus, although the analysis supported the underlying hypothesis, the specific hypothesis regarding the GRRs' greater significance in buffering cognitive stress was not supported. Second, the results did not demonstrate statistically significant differences among the GRR domains, so the hypothesis that the social GRR domain was the most significant in mediating stress of family caregivers was not supported. Finally, the results confirmed that there are gender differences in the importance of social support help in mediating stress: gender and social support help were related to cognitive stress, and gender had a statistically significant interaction effect with social support help. Implications for clinical practice, public health policy, and research are discussed.
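As a hedged illustration of the moderation test described (gender interacting with social support in predicting cognitive stress), the sketch below fits an OLS model with an interaction term on fabricated data; the variable names are invented stand-ins for the study's measures.

```python
# Hedged sketch with invented data: regress cognitive stress on care demands,
# a social GRR measure, and a gender x social-support interaction, mirroring
# the kind of moderation test the study describes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 784  # matches the study's sample size; values are simulated
df = pd.DataFrame({
    "burden": rng.normal(0, 1, n),
    "demands": rng.normal(0, 1, n),
    "social_grr": rng.normal(0, 1, n),
    "female": rng.integers(0, 2, n),
})
df["stress"] = (0.5 * df["burden"] + 0.3 * df["demands"] - 0.1 * df["social_grr"]
                - 0.15 * df["female"] * df["social_grr"] + rng.normal(0, 1, n))

fit = smf.ols("stress ~ burden + demands + social_grr * female", data=df).fit()
print(fit.summary())  # the social_grr:female term tests the gender interaction
```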

Relevance:

30.00%

Abstract:

External beam radiation therapy is used to treat nearly half of the more than 200,000 new cases of prostate cancer diagnosed in the United States each year. During a radiation therapy treatment, healthy tissues in the path of the therapeutic beam are exposed to high doses. In addition, the whole body is exposed to a low-dose bath of unwanted scatter radiation from the pelvis and leakage radiation from the treatment unit. As a result, survivors of radiation therapy for prostate cancer face an elevated risk of developing a radiogenic second cancer. Recently, proton therapy has been shown to reduce the dose delivered by the therapeutic beam to normal tissues during treatment compared to intensity modulated x-ray therapy (IMXT), the current standard of care. However, the magnitude of stray radiation doses from proton therapy, and their impact on the incidence of radiogenic second cancers, was not known. The risk of a radiogenic second cancer following proton therapy for prostate cancer relative to IMXT was determined for 3 patients of large, median, and small anatomical stature. Doses delivered to healthy tissues from the therapeutic beam were obtained from treatment planning system calculations. Stray doses from IMXT were taken from the literature, while stray doses from proton therapy were simulated using a Monte Carlo model of a passive scattering treatment unit and an anthropomorphic phantom. Baseline risk models were taken from the Biological Effects of Ionizing Radiation VII report. A sensitivity analysis was conducted to characterize the sensitivity of the risk calculations to uncertainties in the risk model, in the relative biological effectiveness (RBE) of neutrons for carcinogenesis, and in inter-patient anatomical variations. The risk projections revealed that proton therapy carries a lower risk of radiogenic second cancer incidence following prostate irradiation than IMXT. The sensitivity analysis revealed that the results depended only weakly on uncertainties in the risk model and on inter-patient variations, whereas second cancer risks were sensitive to changes in the RBE of neutrons. However, the findings were qualitatively consistent for all patient sizes and risk models considered, and for all neutron RBE values less than 100.
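A simplified sketch of the risk comparison, assuming invented organ doses and risk coefficients: second-cancer risk is approximated as a sum of organ doses times BEIR VII-style coefficients, with stray neutron absorbed dose weighted by an assumed RBE. None of these numbers come from the study.

```python
# Simplified sketch (invented doses and coefficients): compare second-cancer
# risk for IMXT vs proton therapy by summing organ doses times risk
# coefficients, with stray neutron dose weighted by an assumed RBE.
organs = ["bladder", "rectum", "colon"]
risk_coeff = {"bladder": 0.004, "rectum": 0.001, "colon": 0.005}    # per Sv, illustrative
imxt_dose = {"bladder": 2.0, "rectum": 1.5, "colon": 0.8}           # Sv, therapeutic + stray
proton_therapeutic = {"bladder": 1.2, "rectum": 0.9, "colon": 0.3}  # Sv
neutron_gy = {"bladder": 0.004, "rectum": 0.004, "colon": 0.003}    # Gy of stray neutrons
RBE_NEUTRON = 25.0  # the key uncertain parameter; the study varied it up to 100

def total_risk(dose):
    """Sum organ-specific risks: coefficient times equivalent dose."""
    return sum(risk_coeff[o] * dose[o] for o in organs)

proton_dose = {o: proton_therapeutic[o] + RBE_NEUTRON * neutron_gy[o] for o in organs}
print(f"risk ratio proton/IMXT = {total_risk(proton_dose) / total_risk(imxt_dose):.2f}")
```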

Relevance:

30.00%

Abstract:

The type 2 diabetes (diabetes) pandemic is recognized as a threat to tuberculosis (TB) control worldwide. This secondary data analysis estimated the contribution of diabetes to TB in a binational community on the Texas-Mexico border where both diseases occur. Newly diagnosed TB patients > 20 years of age were prospectively enrolled at Texas-Mexico border clinics between January 2006 and November 2008. Upon enrollment, information regarding social, demographic, and medical risks for TB, including self-reported diabetes, was collected at interview. Self-reported diabetes was confirmed by blood testing according to guidelines published by the American Diabetes Association (ADA). These data were compared with existing statistics on TB incidence and diabetes prevalence from the corresponding general populations of each study site to estimate the relative and attributable risks of diabetes for TB. In concordance with historical sociodemographic data for TB patients with self-reported diabetes, our TB patients with diabetes lacked the risk factors traditionally associated with TB (alcohol abuse, drug abuse, history of incarceration, and HIV infection); instead, the majority were characterized by overweight/obesity, chronic hyperglycemia, and older median age. In addition, diabetes prevalence among our TB patients was significantly higher than in the corresponding general populations. The findings of this study help characterize TB patients with diabetes accurately, thus aiding in the timely recognition and diagnosis of TB in a population not traditionally viewed as at risk. We provide epidemiological and biological evidence that diabetes continues to be an increasingly important risk factor for TB.
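The relative-risk and attributable-fraction arithmetic the abstract refers to can be sketched as below, with fabricated counts standing in for the study and census data.

```python
# Toy sketch (fabricated counts) of the relative risk and population
# attributable fraction calculations the abstract describes, comparing TB
# incidence in people with and without diabetes.
tb_exposed, n_exposed = 60, 10_000        # TB cases among people with diabetes
tb_unexposed, n_unexposed = 120, 90_000   # TB cases among people without diabetes

risk_exposed = tb_exposed / n_exposed
risk_unexposed = tb_unexposed / n_unexposed
rr = risk_exposed / risk_unexposed        # relative risk of TB given diabetes

prevalence = n_exposed / (n_exposed + n_unexposed)          # diabetes prevalence
paf = prevalence * (rr - 1) / (1 + prevalence * (rr - 1))   # Levin's formula
print(f"RR = {rr:.1f}, population attributable fraction = {paf:.1%}")
```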

Relevance:

30.00%

Abstract:

Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random dose errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with the risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of the excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DDREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias depended on the gender, site, correction method, exposure profile, and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
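A hedged sketch of the modeling approach: a log-linear Poisson regression of deaths on dose and covariates with person-years as exposure, after shrinking observed doses by a simple multiplicative factor for lognormal dose error (σ = 0.45). The data, the shrinkage form, and the coefficients are all invented; the study's actual correction used stratum-specific reduction factors.

```python
# Illustrative sketch (invented data): Poisson regression of mortality on a
# dose corrected for random error, with age at bombing and sex as covariates
# and person-years as exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "dose_obs": rng.lognormal(-1.5, 1.0, n),   # observed dose (Sv), with error
    "age_atb": rng.integers(5, 60, n),         # age at time of bombing
    "male": rng.integers(0, 2, n),
    "pyr": rng.uniform(10.0, 40.0, n),         # person-years at risk
})
SIGMA = 0.45
df["dose_corr"] = df["dose_obs"] * np.exp(-SIGMA**2)  # assumed shrinkage factor

lam = df["pyr"] * 0.001 * np.exp(0.3 * df["dose_corr"] + 0.02 * df["age_atb"])
df["deaths"] = rng.poisson(lam)

fit = smf.poisson("deaths ~ dose_corr + age_atb + male",
                  data=df, exposure=df["pyr"]).fit()
print(np.exp(fit.params["dose_corr"]))  # fitted mortality rate ratio per Sv
```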

Relevance:

30.00%

Abstract:

ACCURACY OF THE BRCAPRO RISK ASSESSMENT MODEL IN MALES PRESENTING TO MD ANDERSON FOR BRCA TESTING. Carolyn A. Garby, B.S. Supervisory Professor: Banu Arun, M.D.

Hereditary Breast and Ovarian Cancer (HBOC) syndrome is due to mutations in the BRCA1 and BRCA2 genes. Women with HBOC have high risks of developing breast and ovarian cancers. Males with HBOC are commonly overlooked because male breast cancer is rare and other male cancer risks, such as prostate and pancreatic cancers, are relatively low. BRCA genetic testing is indicated for men, as it is currently estimated that 4-40% of male breast cancers result from a BRCA1 or BRCA2 mutation (Ottini, 2010), and management recommendations can be made based on genetic test results. Risk assessment models are available to provide an individualized likelihood of carrying a BRCA mutation. Only one study to date has evaluated the accuracy of BRCAPro in males; it was based on a cohort of Italian males and used an older version of BRCAPro. The objective of this study is to determine whether BRCAPro 5.1 is a valid risk assessment model for males who present to MD Anderson Cancer Center for BRCA genetic testing. BRCAPro has previously been validated for determining the probability of carrying a BRCA mutation but has not been examined further in males specifically. The total cohort consisted of 152 males who had undergone BRCA genetic testing, stratified by indication for genetic counseling: a known familial BRCA mutation, a personal diagnosis of a BRCA-related cancer, or a family history suggestive of HBOC. Overall there were 22 (14.47%) BRCA1-positive males and 25 (16.45%) BRCA2-positive males. Receiver operating characteristic curves were constructed for the cohort overall, for each indication, and for each cancer subtype. Our findings revealed that the BRCAPro 5.1 model had perfect discriminating ability at a threshold of 56.2 for males with breast cancer; however, only 2 (4.35%) of the 46 males with breast cancer were found to have BRCA2 mutations. This rate is markedly lower than the high end (40%) reported in the previous literature. BRCAPro does perform well for men in certain situations. Future investigation of male breast cancer and of men at risk for BRCA mutations is necessary to provide more accurate risk assessment.
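A minimal sketch of the ROC/AUC validation step on synthetic data; the carrier counts echo the cohort (47 of 152 BRCA-positive), but the predicted probabilities are fabricated rather than actual BRCAPro output.

```python
# Minimal sketch (synthetic scores): ROC-style evaluation of a risk model's
# predicted carrier probabilities against observed mutation status, the kind
# of validation the study applies to BRCAPro 5.1.
import numpy as np

rng = np.random.default_rng(3)
carrier = np.concatenate([np.ones(47), np.zeros(105)])  # 47 of 152 BRCA-positive
score = np.where(carrier == 1,
                 rng.beta(3, 3, carrier.size),          # fabricated P(mutation), carriers
                 rng.beta(2, 5, carrier.size))          # fabricated P(mutation), non-carriers

def auc(y, s):
    """AUC as P(score of a random carrier > score of a random non-carrier)."""
    pos, neg = s[y == 1], s[y == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

print(f"AUC = {auc(carrier, score):.3f}")
```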

Relevance:

30.00%

Abstract:

Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions on which these methods are based and the effects of the methods are not well understood. To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that build a conservative safety factor into these calculations. A method of quantifying this conservatism was described and tested on eight procedures used in setting low-dose safety standards. The results of the procedures were compared by computer simulation and by the use of data from a large-scale animal study. The method consisted of determining a "true safe dose" (tsd) according to an assumed underlying model. If Y = the probability of cancer = P(d), a known mathematical function of the dose d, then by setting Y to some predetermined acceptable risk, one can solve for d, the model's true safe dose. Simulations assuming a binomial distribution were generated for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean of R over 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure. The eight procedures were Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan method, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model. None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well; however, when the data were non-linear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
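A conceptual sketch of the simulation design, using the one-hit model P(d) = 1 - exp(-βd) with invented parameters: solve for the tsd at risk 1e-6, simulate a high-dose bioassay, re-fit the model to obtain a vsd, and form R = (tsd - vsd)/vsd. A bare maximum-likelihood fit like this carries no deliberate conservatism; the eight procedures studied add safety factors on top of such fits.

```python
# Conceptual sketch (invented parameters): one round of the tsd/vsd simulation.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

BETA_TRUE = 2.0                  # true one-hit slope: P(d) = 1 - exp(-beta * d)
TARGET = 1e-6                    # acceptable lifetime risk
tsd = -np.log(1 - TARGET) / BETA_TRUE   # "true safe dose" under the model

rng = np.random.default_rng(4)
doses = np.array([0.25, 0.5, 1.0])      # high experimental doses
n_per_group = 50
tumors = rng.binomial(n_per_group, 1 - np.exp(-BETA_TRUE * doses))

def neg_loglik(beta):
    """Binomial negative log-likelihood of the one-hit model."""
    p = 1 - np.exp(-beta * doses)
    return -binom.logpmf(tumors, n_per_group, p).sum()

beta_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded").x
vsd = -np.log(1 - TARGET) / beta_hat    # estimated "virtual safe dose"
print(f"tsd = {tsd:.3e}, vsd = {vsd:.3e}, R = {(tsd - vsd) / vsd:+.3f}")
```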