3 results for b-hCG regression curve
at Duke University
Abstract:
Prostate cancer (PC) is the second leading cause of cancer death in men. Recent reports suggest that an excess of nutrients involved in the one-carbon metabolism pathway increases PC risk; however, empirical data are lacking. American veteran men (272 controls and 144 PC cases) who attended the Durham Veterans Affairs Medical Center between 2004 and 2009 were enrolled in a case-control study. Intake of folate, vitamin B12, vitamin B6, and methionine was measured using a food frequency questionnaire. Regression models were used to evaluate the associations among one-carbon cycle nutrients, MTHFR genetic variants, and prostate cancer. Higher dietary methionine intake was associated with PC risk (OR = 2.1; 95% CI 1.1-3.9). The risk was most pronounced in men with Gleason sum <7 (OR = 2.75; 95% CI 1.32-5.73). The association between higher methionine intake and PC risk was apparent only in men who carried at least one MTHFR A1298C allele (OR = 6.7; 95% CI 1.6-27.8), compared with MTHFR A1298A non-carrier men (OR = 0.9; 95% CI 0.24-3.92) (p-interaction = 0.045). There was no evidence of associations between B vitamins (folate, B12, and B6) and PC risk. Our results suggest that carrying the MTHFR A1298C variant modifies the association between high methionine intake and PC risk. Larger studies are required to validate these findings.
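As a purely hypothetical sketch of the kind of analysis described above, the snippet below fits a logistic regression of case status on dichotomized methionine intake, MTHFR A1298C carrier status, and their interaction, which is one common way to obtain odds ratios, confidence intervals, and an interaction p-value. The column names and simulated data are assumptions for illustration, not the study's actual variables or code.

```python
# Hypothetical sketch of a case-control interaction analysis (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 416  # 272 controls + 144 cases, as in the abstract
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),             # 1 = prostate cancer case (simulated)
    "high_methionine": rng.integers(0, 2, n),  # 1 = above-median intake (assumed coding)
    "a1298c_carrier": rng.integers(0, 2, n),   # 1 = carries >= 1 MTHFR A1298C allele
    "age": rng.normal(65, 8, n),
})

# Interaction model: the methionine odds ratio is allowed to differ by genotype.
model = smf.logit(
    "case ~ high_methionine * a1298c_carrier + age", data=df
).fit(disp=False)

print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs on the OR scale
print(model.pvalues["high_methionine:a1298c_carrier"])  # interaction p-value
```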
Abstract:
The kinesin-like factor 1B (KIF1B) gene plays an important role in apoptosis and in the transformation and progression of malignant cells. Genetic variations in KIF1B may contribute to the risk of epithelial ovarian cancer (EOC). In this study of 1,324 EOC patients and 1,386 cancer-free female controls, we investigated associations between two potentially functional single nucleotide polymorphisms in KIF1B and EOC risk using conditional logistic regression analysis. A general linear regression model was used to evaluate the correlation between the number of variant alleles and KIF1B mRNA expression levels. We found that the rs17401966 variant AG/GG genotypes were significantly associated with a decreased risk of EOC (adjusted odds ratio (OR) = 0.81, 95% confidence interval (CI) = 0.68-0.97) compared with the AA genotype, but no associations were observed for rs1002076. Women who carried both the rs17401966 AG/GG and rs1002076 AG/AA genotypes of KIF1B had a decreased risk (adjusted OR = 0.82, 95% CI = 0.69-0.97) compared with others. Additionally, there was no evidence of an interaction between the above-mentioned variants. Further genotype-phenotype correlation analysis indicated that the number of rs17401966 variant G alleles was significantly associated with KIF1B mRNA expression levels (P for GLM = 0.003 and 0.001 in all subjects and in Chinese subjects, respectively), with GG carriers having the lowest level of KIF1B mRNA expression. Taken together, the rs17401966 polymorphism likely regulates KIF1B mRNA expression and thus may be associated with EOC risk in Eastern Chinese women. Larger, independent studies are warranted to validate our findings.
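As a rough illustration of the genotype-phenotype correlation step, the sketch below regresses simulated KIF1B mRNA expression on the count of rs17401966 G alleles (0/1/2) with an ordinary least squares trend model; the variable names and data are assumed for the example and do not reflect the study's actual pipeline.

```python
# Hypothetical trend test: expression level vs. number of variant G alleles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
g_alleles = rng.integers(0, 3, size=300)                     # 0, 1, or 2 G alleles
expression = 5.0 - 0.4 * g_alleles + rng.normal(0, 1, 300)   # GG lowest (simulated)

df = pd.DataFrame({"g_alleles": g_alleles, "expression": expression})
trend = smf.ols("expression ~ g_alleles", data=df).fit()
print(trend.params["g_alleles"], trend.pvalues["g_alleles"])  # slope and P value
```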
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication cost, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose the MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm to address these issues. The algorithm applies feature selection in parallel to each subset using a regularized regression or Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments to show excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
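The following is a compact sketch of the {\em message} idea under stated assumptions: lasso-based selection on each row subset (the algorithm also admits Bayesian variable selection), a median rule over the inclusion indicators, per-subset refitting on the selected features, and averaging. The subset count, penalty choice, and refit step are illustrative, not the exact implementation from the thesis.

```python
# Sketch of a message-style estimator: parallel selection, median inclusion,
# parallel refit, then averaging (all loops here are parallelizable).
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def message(X, y, n_subsets=4, seed=None):
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(y)), n_subsets)

    # 1) Feature selection on each row subset (parallelizable).
    inclusion = np.array([
        LassoCV(cv=5).fit(X[rows], y[rows]).coef_ != 0 for rows in subsets
    ])

    # 2) "Median" inclusion index: keep features selected by >= half the subsets.
    selected = np.median(inclusion, axis=0) >= 0.5
    beta = np.zeros(X.shape[1])
    if not selected.any():
        return beta, selected

    # 3) Re-estimate coefficients on the selected features per subset, then average.
    betas = [
        LinearRegression().fit(X[rows][:, selected], y[rows]).coef_ for rows in subsets
    ]
    beta[selected] = np.mean(betas, axis=0)
    return beta, selected

# Example on simulated sparse data.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 50))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + rng.normal(size=2000)
beta_hat, selected = message(X, y, seed=2)
print(np.flatnonzero(selected), beta_hat[:5])
```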
While sample space partitioning is useful for handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
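A hedged sketch of the decorrelation idea follows: columns are split across workers, but each worker fits against a decorrelated version of the data. The decorrelation operator used here, $(XX^T/p + rI)^{-1/2}$ applied to both $X$ and $y$, is one standard form of such a step; the ridge term $r$ and the per-worker lasso are illustrative choices rather than the exact {\em DECO} specification.

```python
# Sketch of DECO-style feature-space partitioning with a decorrelation step.
import numpy as np
from numpy.linalg import eigh
from sklearn.linear_model import LassoCV

def decorrelate(X, y, r=1.0):
    n, p = X.shape
    G = X @ X.T / p + r * np.eye(n)          # n x n Gram-type matrix (assumed form)
    vals, vecs = eigh(G)
    G_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return G_inv_sqrt @ X, G_inv_sqrt @ y    # decorrelated design and response

def deco(X, y, n_workers=4):
    X_t, y_t = decorrelate(X, y)
    beta = np.zeros(X.shape[1])
    # Each column block could be fitted by a separate worker in parallel.
    for cols in np.array_split(np.arange(X.shape[1]), n_workers):
        beta[cols] = LassoCV(cv=5).fit(X_t[:, cols], y_t).coef_
    return beta

# Example: high-dimensional simulated data with p > n.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 400))
y = 3.0 * X[:, 0] - 2.0 * X[:, 7] + rng.normal(size=200)
print(np.flatnonzero(np.abs(deco(X, y)) > 0.5))
```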
For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, {\em DEME} (DECO-message), that leverages both the {\em DECO} and {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of each cube using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each of a feasible size that can be stored and fitted on a single machine in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
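The composition can be sketched by reusing the hypothetical message-style and DECO-style functions above: an outer partition of the rows into cubes, a DECO-style fit within each cube, and a message-style synthesis (median selection and averaging) across cubes. Cube counts and the selection threshold are assumptions for illustration, not the thesis's exact procedure.

```python
# Sketch of a DEME-style composition, reusing the deco() sketch defined above.
import numpy as np

def deme(X, y, n_row_cubes=4, n_col_workers=4, seed=None):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    # Outer (message-style) partition of the sample space into row cubes;
    # each cube's fit internally partitions the feature space (DECO-style).
    betas = np.array([
        deco(X[rows], y[rows], n_workers=n_col_workers)
        for rows in np.array_split(idx, n_row_cubes)
    ])
    # Synthesis in reverse order: median-style selection across cubes,
    # then averaging of the retained coefficients.
    selected = np.median(betas != 0, axis=0) >= 0.5
    beta = np.zeros(X.shape[1])
    beta[selected] = betas[:, selected].mean(axis=0)
    return beta
```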