23 results for Ratio bias effect
in DigitalCommons@The Texas Medical Center
Abstract:
Of the large clinical trials evaluating screening mammography efficacy, none included women ages 75 and older. Recommendations on an upper age limit at which to discontinue screening are based on indirect evidence and are inconsistent. Here, screening mammography is evaluated using observational data from the SEER-Medicare linked database. Measuring the benefit of screening mammography is difficult because of lead-time bias, length bias, and over-detection. The underlying conceptual model divides the disease into two stages: pre-clinical (T0) and symptomatic (T1) breast cancer. Treating the times spent in these phases as a pair of dependent bivariate observations, (t0, t1), estimates are derived to describe the distribution of this random vector. To quantify the effect of screening mammography, statistical inference is made about the mammography parameters that correspond to the marginal distribution of the symptomatic-phase duration (T1). The hazard ratio of death from breast cancer comparing women with screen-detected tumors to those whose tumors were detected at symptom onset is 0.36 (95% CI: 0.30, 0.42), indicating a benefit among the screen-detected cases.
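For intuition about the two-phase model, the sketch below simulates dependent phase durations (t0, t1). The exponential margins, Gaussian copula, and all parameter values are illustrative assumptions of mine, not the dissertation's fitted model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical parameters (not estimated from SEER-Medicare): exponential
# sojourn-time margins for the pre-clinical (T0) and symptomatic (T1)
# phases, made dependent through a Gaussian copula with correlation rho.
n, rho = 10_000, 0.4
mean_t0, mean_t1 = 2.0, 5.0            # assumed mean phase durations, years

z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = norm.cdf(z)                        # N(0,1) marginals -> U(0,1)
t0 = -mean_t0 * np.log1p(-u[:, 0])     # inverse CDF of an exponential
t1 = -mean_t1 * np.log1p(-u[:, 1])

# The marginal distribution of T1 is the target of inference about the
# screening benefit in the analysis described above.
print(f"E[T1] estimate: {t1.mean():.2f} y, corr(T0, T1): "
      f"{np.corrcoef(t0, t1)[0, 1]:.2f}")
```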
Abstract:
Our objective was to determine the effect of body mass index (BMI) on response to bacterial vaginosis (BV) treatment. A secondary analysis was conducted of two multicenter trials of therapy for BV and Trichomonas vaginalis. Gravidas were screened for BV between 8 and 22 weeks and randomized between 16 and 23 weeks to metronidazole or placebo. Of 1497 gravidas with asymptomatic BV and a preconceptional BMI, 738 were randomized to metronidazole; BMI was divided into categories: < 25, 25 to 29.9, and ≥ 30. Rates of BV persistence at follow-up were compared using the Mantel-Haenszel chi-square test. Multiple logistic regression was used to evaluate the effect of BMI on BV persistence at follow-up, adjusting for potential confounders. No association was identified between BMI and BV rate at follow-up (P = 0.21). BMI was associated with maternal age, smoking, marital status, and black race. Compared with women with a BMI of < 25, the adjusted odds ratios (ORs) of BV at follow-up were: BMI 25 to 29.9, OR 0.66 (95% CI 0.43 to 1.02); BMI ≥ 30, OR 0.83 (95% CI 0.54 to 1.26). We concluded that the persistence of BV after treatment was not related to BMI.
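A minimal sketch of the adjusted-OR analysis described above, using statsmodels logistic regression; the data frame, column names, and sample values are hypothetical stand-ins for the trial data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 738

# Hypothetical data; variable names are illustrative, not the trial's.
df = pd.DataFrame({
    "bv_persist": rng.integers(0, 2, n),
    "bmi_cat": rng.choice(["<25", "25-29.9", ">=30"], n),
    "age": rng.normal(26, 5, n).round(),
    "smoker": rng.integers(0, 2, n),
})

# Adjusted odds ratios from multiple logistic regression, with BMI < 25
# as the reference category (mirroring the comparison described above).
fit = smf.logit("bv_persist ~ C(bmi_cat, Treatment('<25')) + age + smoker",
                data=df).fit(disp=False)
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors.rename("OR"), ci], axis=1))
```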
Abstract:
This study compared four alternative approaches (Taylor, Fieller, percentile bootstrap, and bias-corrected bootstrap methods) to estimating confidence intervals (CIs) around the cost-effectiveness (CE) ratio. The study consisted of two components: (1) a Monte Carlo simulation was conducted to identify characteristics of hypothetical cost-effectiveness data sets that might lead one CI estimation technique to outperform another, and (2) these results were matched to the characteristics of an extant data set derived from the National AIDS Demonstration Research (NADR) project, and the four methods were used to calculate CIs for that data set. The resulting intervals were then compared. The main performance criterion in the simulation study was the percentage of times the estimated CIs contained the "true" CE ratio. A secondary criterion was the average width of the confidence intervals. For the bootstrap methods, bias was also estimated. Simulation results for the Taylor and Fieller methods indicated that the CIs estimated using the Taylor series method contained the true CE ratio more often than did those obtained using the Fieller method, but the opposite was true when the correlation was positive and the CV of effectiveness was high for each value of the CV of costs. Similarly, the CIs obtained by applying the Taylor series method to the NADR data set were wider than those obtained using the Fieller method for positive correlation values and for values for which the CV of effectiveness was not equal to 30% for each value of the CV of costs. The general trend for the bootstrap methods was that the percentage of times the true CE ratio was contained in the CIs was higher for the percentile method at higher values of the CV of effectiveness, given the correlation between average costs and effects. The results for the data set indicated that the bias-corrected CIs were wider than the percentile-method CIs, in accordance with the prediction derived from the simulation experiment. Generally, the bootstrap methods are more favorable for the parameter specifications investigated in this study; however, the Taylor method is preferred for a low CV of effect, and the percentile method is more favorable for a higher CV of effect.
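The two bootstrap approaches are easy to make concrete. Below is a sketch of the percentile and bias-corrected CIs for a CE ratio on simulated data; the cost and effect distributions and all constants are assumptions for illustration, not NADR values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical per-participant data: the CE ratio is mean cost over
# mean effect.
n, B = 200, 5000
cost = rng.gamma(shape=4.0, scale=250.0, size=n)     # assumed cost data
effect = rng.normal(loc=0.8, scale=0.3, size=n)      # assumed effect data

def ce_ratio(c, e):
    return c.mean() / e.mean()

theta_hat = ce_ratio(cost, effect)
idx = rng.integers(0, n, size=(B, n))                # bootstrap resamples
boots = np.array([ce_ratio(cost[i], effect[i]) for i in idx])

# Percentile CI: the empirical 2.5th and 97.5th percentiles.
pct_ci = np.percentile(boots, [2.5, 97.5])

# Bias-corrected CI: shift the percentiles by z0, the normal quantile of
# the fraction of bootstrap replicates below the point estimate.
z0 = norm.ppf((boots < theta_hat).mean())
alpha = norm.cdf([2 * z0 + norm.ppf(0.025), 2 * z0 + norm.ppf(0.975)])
bc_ci = np.percentile(boots, 100 * alpha)

print(f"CE ratio {theta_hat:.1f}, percentile CI {np.round(pct_ci, 1)}, "
      f"BC CI {np.round(bc_ci, 1)}")
```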
Abstract:
The use of group-randomized trials is particularly widespread in the evaluation of health care, educational, and screening strategies. Group-randomized trials represent a subset of a larger class of designs often labeled nested, hierarchical, or multilevel, and are characterized by the randomization of intact social units or groups, rather than individuals. The application of random effects models to group-randomized trials requires the specification of fixed and random components of the model, and the underlying assumption is usually that these random components are normally distributed. This research is intended to determine whether the Type I error rate and power are affected when the assumption of normality for the random component representing the group effect is violated. In this study, simulated data are used to examine the Type I error rate, power, bias, and mean squared error of the estimates of the fixed effect and the observed intraclass correlation coefficient (ICC) when the random component representing the group effect possesses distributions with non-normal characteristics, such as heavy tails or severe skewness. The simulated data are generated with various characteristics (e.g., number of schools per condition, number of students per school, and several within-school ICCs) observed in most small, school-based, group-randomized trials. The analysis is carried out using SAS PROC MIXED, Version 6.12, with random effects specified in a RANDOM statement and restricted maximum likelihood (REML) estimation. The results from the non-normally distributed data are compared to the results obtained from the analysis of data with similar design characteristics but normally distributed random effects. The results suggest that violation of the normality assumption for the group component by a skewed or heavy-tailed distribution does not appear to influence the estimation of the fixed effect, the Type I error rate, or power. Negative biases were detected when estimating the sample ICC, and these increased dramatically in magnitude as the true ICC increased. The biases were not as pronounced when the true ICC was within the range observed in most group-randomized trials (i.e., 0.00 to 0.05). The normally distributed group effect also resulted in biased ICC estimates when the true ICC was greater than 0.05; however, this may be a result of higher correlation within the data.
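In the same spirit as the simulation study (though in Python rather than SAS PROC MIXED), the sketch below generates group-randomized data with a skewed, shifted-gamma group effect and recovers the ICC by REML; the design constants are assumed, not the dissertation's:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Assumed design: 20 schools, 50 students per school, true ICC of 0.05,
# with a skewed (shifted gamma) school-level random effect replacing the
# usual normal one -- the violation examined above.
m, n_per, icc, sigma2 = 20, 50, 0.05, 1.0
tau2 = icc * sigma2 / (1 - icc)            # between-school variance
shape = 2.0
scale = np.sqrt(tau2 / shape)
school_eff = rng.gamma(shape, scale, m) - shape * scale  # mean 0, var tau2

df = pd.DataFrame({
    "school": np.repeat(np.arange(m), n_per),
    "y": (np.repeat(school_eff, n_per)
          + rng.normal(0, np.sqrt(sigma2), m * n_per)),
})

# Random-intercept model fit by REML; ICC = tau2 / (tau2 + sigma2).
fit = sm.MixedLM.from_formula("y ~ 1", df, groups="school").fit(reml=True)
tau2_hat = float(fit.cov_re.iloc[0, 0])
icc_hat = tau2_hat / (tau2_hat + fit.scale)
print(f"true ICC {icc:.3f}, REML ICC estimate {icc_hat:.3f}")
```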
Abstract:
Administration of gonadotropins or testosterone (T) will maintain qualitatively normal spermatogenesis and fertility in hypophysectomized (APX) rats. However, quantitative maintenance of the spermatogenic process in APX rats treated with T alone or in combination with follicle-stimulating hormone (FSH) has not been demonstrated. The studies reported here were conducted to determine whether it is possible to raise intratesticular testosterone (ITT) levels in APX rats to those found in normal animals by administering appropriate amounts of testosterone propionate (TP) and, if so, whether spermatogenesis can be maintained quantitatively under these conditions. Quantitative analysis of spermatogenesis was performed on stages VI and VII of the spermatogenic cycle utilizing the criteria of Leblond and Clermont (1952); all cell types were enumerated. In a series of experiments designed to investigate the effects of T on spermatogenesis, TP was administered to 60-day-old APX rats twice daily for 30 days in doses ranging from 0.6 to 15 mg/day, or from 0.6 to 6.0 mg/day in combination with FSH. The results of this study demonstrate that the efficiency of transformation of type A to type B spermatogonia and the efficiency of the meiotic prophase are related to ITT levels, and that quantitatively normal completion of the reduction division requires normal ITT levels. The ratio of spermatocytes to spermatids in the vehicle-treated APX rats was 1:1.38; in APX rats treated with 15 mg of TP it was 1:4.0 (the theoretically expected value, since each spermatocyte yields four spermatids through meiosis). This study is probably the first to demonstrate: (1) the pharmacokinetics of TP; (2) the profile and quantity of T immunoactivity in both serum and testicular tissue of APX and intact control (IC) rats, as well as APX rats treated with TP alone or in combination with FSH; (3) the direct correlation of serum T and ITT levels in treated APX rats (r = 0.9, p < 0.001) as well as in IC rats (r = 0.9, p < 0.001); (4) the significant increase in the number of type B spermatogonia, preleptotene and pachytene spermatocytes, and round spermatids in TP-treated APX rats; (5) the correlation of the number of round spermatids formed in IC rats with ITT levels (r = 0.9, p < 0.001); and (6) the correlation of the quantitative maintenance of spermatogenesis with ITT levels (r = 0.7, p < 0.001) in the testes of TP-treated APX rats. These results provide direct experimental evidence for the key role of T in the spermatogenic process.
Abstract:
It is often claimed in the H. pylori literature that spontaneous clearance (loss of infection without attempts to treat) is uncommon, though little evidence supports this claim. Emerging evidence suggests that spontaneous clearance may be frequent in young children; however, the factors that determine persistence of untreated H. pylori infection in childhood are not well understood. The author hypothesized that antibiotics taken for common infections cause spontaneous clearance of H. pylori infection in children. The Pasitos Cohort Study (1998-2005) investigated predictors of acquisition and persistence of H. pylori infection in children from El Paso, Texas, and Juarez, Mexico, enrolled prenatally at maternal-child clinics. Children were screened for infection at target intervals of 6 months from 6 to 84 months of age by the 13C-urea breath test, corrected for body-size-dependent variation in CO2 production. This dissertation aimed to estimate the risk of spontaneous clearance at the next test following an initial detected H. pylori infection (first detected clearance), the effect of antibiotic exposure on the risk of first detected clearance (risk difference), and the effect of antibiotic exposure on the rate of first detected infection (rate ratio). Data on infection status and medication history were available for 608 children followed for a mean of 3.5 years. Among 265 subjects with a first detected infection, 218 had a subsequent test; among them, the risk of first detected clearance was 68% (95% CI: 61-74%). Children who took antibiotics during the interval between the first detected infection and the next test had an increased probability of a first detected clearance (risk difference of 10 percentage points). However, there was a similar effect of average antibiotic use of >0 courses across all intervals preceding the next test. Average antibiotic exposure across all intervals preceding the first detected infection appeared to have a much stronger protective effect than interval-specific exposure when estimating incidence rate ratios (0.45 vs. 1.0). Incidental antibiotic exposure appears to influence the acquisition and duration of childhood H. pylori infection; however, given that many exposed children acquired the infection and many unexposed children cleared the infection, antibiotic exposure does not explain all infection events.
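A sketch of the two effect measures used above, a risk difference with a Wald CI and an incidence rate ratio, computed from hypothetical counts rather than the Pasitos data:

```python
import numpy as np

# Assumed 2x2 counts: cleared / total, by antibiotic exposure in the
# interval before the next test.
a, n1 = 80, 110   # exposed: cleared, total
c, n0 = 68, 108   # unexposed: cleared, total

p1, p0 = a / n1, c / n0
rd = p1 - p0
se = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
lo, hi = rd - 1.96 * se, rd + 1.96 * se
print(f"risk difference {rd:.3f} (95% Wald CI {lo:.3f} to {hi:.3f})")

# Incidence rate ratio from assumed event counts and person-time at risk.
events_e, pt_e = 30, 400.0    # exposed: infections, person-years
events_u, pt_u = 50, 300.0    # unexposed: infections, person-years
irr = (events_e / pt_e) / (events_u / pt_u)
print(f"incidence rate ratio {irr:.2f}")
```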
Abstract:
Neural tube defect (NTD) rates remain elevated among Hispanic women along the South Texas border, despite folate supplementation and folate fortification of cereal products. Missmer et al. examined the relationships between fumonisins, a class of corn mycotoxins, and NTDs in Hispanic women who ate corn tortillas and found increasing odds ratios with increasing exposure, as measured by serum sphinganine:sphingosine (sa:so) ratios. This study examined the interactions between categorized maternal serum folate levels and stratified sa:so ratios and the resulting odds ratios of NTDs, stratified by type (anencephaly and spina bifida). The hypothesis was that the above-normal folate category would have lower odds ratios of NTDs within given sa:so ratio categories and that odds ratio patterns would differ between anencephaly and spina bifida. Methods. Data for 406 Hispanic women were obtained from the Missmer case-control study. Sa:so ratios were calculated, and subjects were stratified into “below normal,” “normal,” and “above normal” folate ranges. A logistic regression model was applied, controlling for BMI, serum B12, laboratory batch, and conception date. Results. While ORs of NTDs increased with increasing sa:so ratios, ORs for “above normal” folate were not decreased at any sa:so ratio, and there was no statistically significant difference between the ORs for anencephaly and spina bifida. Conclusion. Folate does not appear to be protective against the potential teratogenic effect of fumonisins, and its effect on the OR of NTDs did not differ by type. More research is necessary to determine the extent of fumonisin exposure in Hispanic women along the South Texas border.
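A sketch of a logistic model with a folate-by-sa:so interaction, which is what lets the OR for sa:so exposure vary across folate strata; the variable names and simulated data are placeholders, not the Missmer study fields:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 406

# Hypothetical case-control frame; all columns are illustrative.
df = pd.DataFrame({
    "ntd_case": rng.integers(0, 2, n),
    "saso_cat": rng.choice(["low", "mid", "high"], n),
    "folate_cat": rng.choice(["below", "normal", "above"], n),
    "bmi": rng.normal(27, 5, n),
    "b12": rng.normal(300, 80, n),
})

# The C(saso_cat) * C(folate_cat) term fits main effects plus their
# interaction, so sa:so ORs can differ by folate stratum.
fit = smf.logit("ntd_case ~ C(saso_cat) * C(folate_cat) + bmi + b12",
                data=df).fit(disp=False)
print(np.exp(fit.params).round(2))   # exponentiated coefficients = ORs
```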
Abstract:
Background: Pancreatic cancer is the fourth most common cause of cancer death in the United States. Despite advances in cancer treatment, the prognosis of pancreatic cancer remains extremely poor, with survival rates of 24% and 5% at 1 and 5 years, respectively. Many patients with pancreatic cancer have a history of diabetes and are treated with various antidiabetic regimens, including metformin. In multiple retrospective studies, metformin has been associated with decreased risk of cancer and cancer-related mortality. Metformin has also been reported to inhibit the growth of cancer cells, both in vitro and in vivo. Methods: We conducted a retrospective cohort study to examine the survival benefit of metformin in diabetic patients with pancreatic cancer at MD Anderson Cancer Center (MDACC). A data set of 397 patients who carried the diagnoses of "Diabetes Mellitus" and "Pancreatic Cancer" at MD Anderson was screened for this study. Results: The mean age of patients at cancer diagnosis was 64.0 ± 8.7 years (range 37-84). The majority of the patients were male (65.6%) and Caucasian (78.5%). The most common antidiabetic regimens used were insulin and metformin (39.1% and 38.7%, respectively). Patients' cancers were staged as resectable in 34.1%, locally advanced unresectable in 29.1%, and disseminated in 36.7% of cases. Overall 1-year and 3-year survival rates for all stages combined were 51.8% and 7.6%, respectively. Earlier stage, metformin use, low CA 19-9 level, better ECOG performance status, surgical intervention, negative surgical margins, and smaller tumor size were associated with longer survival. Metformin use was associated with a 33% decrease in the risk of death (HR: 0.67; 95% CI: 0.51-0.88). Multivariate Cox proportional hazards regression showed hazard ratios of 1.77 (95% CI 1.49-2.10) for cancer stage, 0.65 (95% CI 0.49-0.86) for metformin use, and 1.68 (95% CI 1.26-2.23) for CA 19-9 level above the population median. Conclusion: Our study suggests that metformin may improve the outcome in diabetic patients with pancreatic cancer independently of other known prognostic factors. Pancreatic cancer carries an extremely poor prognosis; metformin may provide a suitable adjunct therapeutic option for pancreatic cancer in patients with and without diabetes mellitus.
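A minimal sketch of a multivariate Cox proportional hazards fit of this kind, using the lifelines package; the cohort below is simulated and the column names are placeholders for the MDACC variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 397

# Hypothetical cohort; columns are illustrative stand-ins.
df = pd.DataFrame({
    "months": rng.exponential(14.0, n),    # follow-up time
    "died": rng.integers(0, 2, n),         # event indicator
    "metformin": rng.integers(0, 2, n),
    "stage": rng.integers(1, 4, n),
    "ca199_high": rng.integers(0, 2, n),   # above population median
})

# All columns other than duration and event enter as covariates;
# exp(coef) for `metformin` is the adjusted hazard ratio for metformin use.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```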
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect, and small samples in specific groups pose problems for estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to the standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). Maximum likelihood estimates of the model parameters are used to construct an index, similar to the SMR, for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model thus provides an alternative to both the direct and the indirect standardization methods.
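For reference, a sketch of the conventional SMR with an exact Poisson CI, plus a parametric-bootstrap resampling step of the sort the dissertation describes; the counts are assumed for illustration:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)

# Assumed counts: observed deaths in the exposed group, and deaths
# expected under the standard population's rates.
observed, expected = 24, 17.3

smr = observed / expected
# Exact Poisson 95% CI for the SMR (chi-square formulation).
lo = chi2.ppf(0.025, 2 * observed) / (2 * expected)
hi = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)

# Parametric bootstrap: resample the observed count from a Poisson
# distribution with the fitted mean, then recompute the index.
boot = rng.poisson(observed, size=5000) / expected
blo, bhi = np.percentile(boot, [2.5, 97.5])

print(f"SMR {smr:.2f}, exact CI ({lo:.2f}, {hi:.2f}), "
      f"bootstrap CI ({blo:.2f}, {bhi:.2f})")
```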
Abstract:
Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of the excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased by −27.1% for males and −43.4% for females. Thus, after correction, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias was dependent on the gender, site, correction method, exposure profile, and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
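A sketch of Poisson regression for stratified mortality rates with a log person-years offset, the general form of the model described above; the strata, covariates, and rates below are simulated, not LSS data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical strata: deaths and person-years by dose, sex, and age at
# exposure, modeled with a Poisson GLM whose offset is log person-years.
strata = pd.DataFrame({
    "dose_sv": np.tile([0.0, 0.5, 1.0, 2.0], 4),
    "male": np.repeat([0, 1], 8),
    "age_atb": np.tile(np.repeat([20.0, 40.0], 4), 2),
    "pyears": rng.uniform(5e3, 5e4, 16),
})
rate = 0.002 * np.exp(0.4 * strata.dose_sv - 0.2 * strata.male)
strata["deaths"] = rng.poisson(rate * strata.pyears)

fit = smf.glm("deaths ~ dose_sv + male + age_atb", data=strata,
              offset=np.log(strata.pyears),
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params).round(3))   # rate ratios per unit covariate
```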
Abstract:
A life table methodology was developed which estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males, and this tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory, digestive, and musculoskeletal systems were the three leading causes of cumulative sick time across years of service.
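A toy version of the core life-table computation, assuming a single combined decrement probability per year of service and treating each year present as a full year served; all numbers are invented for illustration, not the 1978 Army data:

```python
import numpy as np

# q[t]: assumed probability of leaving the Army (death, administrative, or
# medical separation combined) during service year t+1, given presence at
# the start of that year; sick[t]: assumed mean sick days in that year.
q = np.array([0.20, 0.15, 0.10, 0.08, 0.08, 0.10, 0.15, 1.00])
sick = np.array([2.0, 2.2, 2.5, 2.8, 3.0, 3.5, 4.0, 5.0])

# l[t]: probability of still serving at the start of year t.
l = np.concatenate([[1.0], np.cumprod(1 - q)[:-1]])

def remaining(start):
    """Expected remaining service years and sick days from year `start`."""
    w = l[start:] / l[start]               # conditional survival weights
    return w.sum(), (w * sick[start:]).sum()

# The illness-impact measure above is the ratio of the two expectations.
for t in range(0, 8, 2):
    serv, days = remaining(t)
    print(f"year {t}: E[service] {serv:.2f} y, E[sick] {days:.1f} d, "
          f"ratio {days / serv:.2f} d/y")
```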
Abstract:
Evaluation of a series of deaths due to a particular disease is a frequently requested task in occupational epidemiology. Several techniques are available to determine whether such a series represents an occupational health problem. Each of these techniques, however, is subject to certain limitations, including cost, applicability to a given situation, feasibility relative to available resources, or potential for bias. In light of these problems, a technique was developed to estimate the standardized mortality ratio at a greatly reduced cost. The technique is demonstrated by its application to the investigation of brain cancer among employees of a large chemical company.
Abstract:
This study establishes the extent and relevance of bias in population estimates of prevalence, incidence, and intensity of infection with Schistosoma mansoni caused by the relative sensitivity of stool examination techniques. The population studied was Parcelas de Boqueron in Las Piedras, Puerto Rico, where the Centers for Disease Control had undertaken a prospective community-based study of infection with S. mansoni in 1972. During each January of the succeeding years, stool specimens from this population were processed according to the modified Ritchie concentration (MRC) technique. During January 1979, additional stool specimens were collected from 30 individuals selected on the basis of their mean S. mansoni egg output during previous years. Each specimen was divided into ten 1-g aliquots and three 42-mg aliquots. The relationship of egg counts obtained with the Kato-Katz (KK) thick-smear technique as a function of the mean of ten counts obtained with the MRC technique was established by means of regression analysis. Additionally, the effect of fecal sample size and egg excretion level on technique sensitivity was evaluated during a blind assessment of single stool specimens, using both examination methods, from 125 residents with documented S. mansoni infections. The regression equation was ln KK = 2.3324 + 0.6319 ln MRC, and the coefficient of determination (r²) was 0.73. The regression equation was then utilized to correct the term m for sample size in the expression P(≥ 1 egg) = 1 − e^(−ms), which estimates the probability P of finding at least one egg as a function of the mean S. mansoni egg output m of the population and the effective stool sample size s utilized by the coprological technique. This algorithm closely approximated the observed sensitivity of the KK and MRC tests when these were used to blindly screen a population of known parasitologic status for infection with S. mansoni. In addition, the algorithm was used to adjust the apparent prevalence of infection for the degree of functional sensitivity exhibited by the diagnostic test. This permitted the estimation of the true prevalence of infection and, hence, a means of correcting estimates of the incidence of infection.
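The detection model and the fitted regression lend themselves to a short worked example. Everything below except the two regression coefficients (2.3324 and 0.6319, quoted from the abstract) is an assumed value:

```python
import numpy as np

# The abstract's detection model: with mean egg output m (eggs per gram)
# and effective stool sample size s (grams), P(>= 1 egg) = 1 - exp(-m*s).
def sensitivity(m, s):
    return 1.0 - np.exp(-m * s)

m = 50.0                                 # assumed mean egg density, eggs/g
sens_kk = sensitivity(m, 0.042)          # Kato-Katz: one 42-mg aliquot
sens_mrc = sensitivity(m, 1.0)           # modified Ritchie: one 1-g aliquot

# The study's regression relating the two techniques' counts:
# ln(KK) = 2.3324 + 0.6319 ln(MRC); here used to predict a KK-scale count
# from an MRC-scale count (units as reported in the study).
kk_pred = np.exp(2.3324 + 0.6319 * np.log(m))

# Correcting apparent prevalence for imperfect sensitivity, assuming
# perfect specificity: true prevalence = apparent / sensitivity.
apparent = 0.30                          # assumed apparent KK prevalence
print(f"KK sensitivity {sens_kk:.2f}, MRC sensitivity {sens_mrc:.3f}")
print(f"predicted KK count {kk_pred:.0f} for MRC count {m:.0f}")
print(f"sensitivity-corrected prevalence {apparent / sens_kk:.2f}")
```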