15 results for Filmic approach methods
in DigitalCommons@The Texas Medical Center
Abstract:
The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduced toxicity and minimized or delayed drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, these analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for cases in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments.
Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method provides a comprehensive and honest account of uncertainty in drug-interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared with treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either of the histone deacetylation inhibitors, suberoylanilide hydroxamic acid or trichostatin A, in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for growth inhibition of ovarian cancer cells.
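The Loewe additivity baseline that underlies this kind of interaction assessment can be sketched numerically. The snippet below is an illustrative computation, not the dissertation's Bayesian method: it assumes each single agent follows a median-effect (Hill) dose-response curve with hypothetical parameters, and computes the interaction index (the sum of dose fractions), where values below 1 indicate synergy and values above 1 indicate antagonism.

```python
def hill_inverse(effect, ec50, m):
    """Dose of a single agent producing a given fractional effect
    under a Hill (median-effect) dose-response model."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / m)

def loewe_index(d1, d2, ec50_1, m1, ec50_2, m2, effect):
    """Loewe interaction index for a combination dose (d1, d2) that
    achieves the given effect; <1 synergy, =1 additivity, >1 antagonism.
    All dose-response parameters here are hypothetical."""
    D1 = hill_inverse(effect, ec50_1, m1)  # equally effective dose of agent 1 alone
    D2 = hill_inverse(effect, ec50_2, m2)  # equally effective dose of agent 2 alone
    return d1 / D1 + d2 / D2

# Example: two agents with EC50 = 1, slope 1; the combination (0.3, 0.3)
# reaching 50% effect yields an index of 0.6, i.e. apparent synergy.
index = loewe_index(0.3, 0.3, 1.0, 1.0, 1.0, 1.0, 0.5)
```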
Abstract:
BACKGROUND: Prostate cancer mortality disparities exist among racial/ethnic groups in the United States, yet few studies have explored the spatiotemporal trend of the disease burden. To better understand mortality disparities by geographic region over time, the present study analyzed the geographic variation of prostate cancer mortality in three Texas racial/ethnic groups over a 22-year period. METHODS: The Spatial Scan Statistic developed by Kulldorff et al. was used. Excess mortality was detected using scan windows of 50% and 90% of the study period and a spatial cluster size of 50% of the population at risk. Time trends were analyzed to examine the potential temporal effects of clustering. Spatial queries were used to identify regions where multiple racial/ethnic groups had excess mortality. RESULTS: The most likely area of excess mortality for blacks occurred in the Dallas-Metroplex and upper east Texas areas between 1990 and 1999; for Hispanics, in central Texas between 1992 and 1996; and for non-Hispanic whites, in the upper south and west-to-central Texas areas between 1990 and 1996. Excess mortality persisted among all racial/ethnic groups in the identified counties. The second scan revealed that three counties in west Texas had excess mortality for Hispanics from 1980 to 2001. Many counties bore an excess mortality burden for multiple groups. There was no declining time trend in prostate cancer mortality for blacks or non-Hispanic whites in Texas. CONCLUSION: Disparities in prostate cancer mortality among racial/ethnic groups existed in Texas. Central Texas counties with excess mortality in multiple subgroups warrant further investigation.
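The core of Kulldorff's spatial scan statistic is a Poisson likelihood ratio evaluated over many candidate circular windows; the window that maximizes it is the most likely cluster, and significance is assessed by Monte Carlo replication. A minimal sketch of the per-window statistic (the observed and expected counts in the example are hypothetical):

```python
import math

def poisson_llr(c, e, C, E):
    """Kulldorff-style log-likelihood ratio for one candidate window:
    c observed / e expected cases inside the window, C / E totals for
    the whole study region. Returns 0 unless the window shows excess risk."""
    if c <= e or c == 0:
        return 0.0  # only windows with observed > expected count as excess
    inside = c * math.log(c / e)
    outside = (C - c) * math.log((C - c) / (E - e)) if C > c else 0.0
    return inside + outside

# A window with twice the expected count scores positive; one exactly
# at expectation scores zero.
llr_excess = poisson_llr(10, 5.0, 100, 100.0)
llr_null = poisson_llr(5, 5.0, 100, 100.0)
```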
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved selecting parent-child trios (TDT). All nine tests were able to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one.
For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals had excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The tests based on the TDT (Transmission Disequilibrium Test) were not subject to any increase in error rates. For all sample ascertainment costs, linkage disequilibrium tests for recent mutations (<100 generations) were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
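Among the trio-based approaches mentioned above, the classic TDT reduces to a McNemar-type statistic on allele transmissions from heterozygous parents, which is why it is robust to population admixture. A minimal sketch (the counts in the example are hypothetical):

```python
def tdt_statistic(b, c):
    """Transmission Disequilibrium Test statistic: b = number of
    heterozygous parents transmitting allele A to an affected child,
    c = number transmitting the other allele. Under the null of no
    linkage/association, this is compared to a chi-square with 1 df."""
    return (b - c) ** 2 / (b + c)

# Example: 30 transmissions of allele A vs 10 of the alternative allele
# gives a statistic of 10.0, well beyond the 3.84 critical value at 5%.
stat = tdt_statistic(30, 10)
```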
Abstract:
Background. This study validated the content of an instrument designed to assess the performance of the medicolegal death investigation system. The instrument was modified from Version 2.0 of the Local Public Health System Performance Assessment Instrument (CDC) and is based on the 10 Essential Public Health Services. Aims. The aims were to employ a cognitive testing process to interview a randomized sample of medicolegal death investigation office leaders, qualitatively describe the results, and revise the instrument accordingly. Methods. A cognitive testing process was used to validate the survey instrument's content in terms of how well participants could respond to and interpret the questions. Twelve randomly selected medicolegal death investigation chiefs (or equivalent), representing the seven types of medicolegal death investigation systems and six different state mandates, were interviewed by telephone. The respondents were also representative of the educational diversity within medicolegal death investigation leadership. Based on respondent comments, themes were identified that permitted improvement of the instrument toward collecting valid and reliable information when ultimately used in a field survey format. Results. Responses were coded and classified, which permitted the identification of themes related to Comprehension/Interpretation, Retrieval, Estimate/Judgment, and Response. The majority of respondent comments related to Comprehension/Interpretation of the questions. Respondents identified 67 questions and 6 section explanations that merited rephrasing or the addition or deletion of examples or words. In addition, five questions were added based on respondent comments. Conclusion. The content of the instrument was validated by the cognitive testing design. The respondents agreed that the instrument would be a useful and relevant tool for assessing system performance.
Abstract:
MAX dimerization protein 1 (MAD1) is a basic helix-loop-helix transcription factor that recruits transcriptional repressors such as HDACs to suppress transcription of its target genes. It antagonizes MYC because the promoter binding sites for MYC usually also serve as binding sites for MAD1, so the two proteins compete for them. However, the mechanism of the switch between MYC and MAD1 in turning gene transcription on and off remains obscure. In this study, we demonstrated that AKT-mediated phosphorylation of MAD1 inhibits its transcription-repression function. The association between MAD1 and its target genes' promoters is reduced after phosphorylation by AKT, consequently allowing MYC to occupy the binding sites and activate transcription. Mutation of the phosphorylation site abrogates this inhibition by AKT. In addition, functional assays demonstrated that AKT suppressed MAD1-mediated transcriptional repression of its target genes hTERT and ODC. Cell cycle progression and cell growth were also released from MAD1 inhibition in the presence of AKT. Taken together, our study suggests that MAD1 is a novel substrate of AKT and that AKT-mediated phosphorylation inhibits MAD1 function, thereby activating expression of MAD1 target genes. Furthermore, analysis of protein-protein interactions is indispensable for current molecular biology research, but multiplex protein dynamics in cells are too complicated to be analyzed with existing biochemical methods. To overcome this limitation, we developed a single-molecule-level detection system based on a nanofluidic chip. Single molecules were analyzed based on their fluorescence profiles, which were plotted into a two-dimensional time-coincident photon burst diagram (2DTP). From this 2DTP, protein complexes were characterized. These results demonstrate that the nanochannel protein detection system is a promising tool for future molecular biology.
Abstract:
Background. Modern-day slavery, now known as human trafficking, is a growing pandemic and a grave human rights violation. Estimates suggest that 12.3 million people are working under conditions of force, fraud, or coercion. Working toward eradication is a worthy effort: it would free millions of people from slavery, mostly women and children, and uphold basic human rights. One tactic for eradicating human trafficking is to increase the identification of victims among those likely to encounter them. Purpose. This study aims to develop an intervention that improves certain stakeholders' ability, in the health clinic setting, to appropriately identify and report victims of human trafficking to the National Human Trafficking Resource Center. Methods. The Intervention Mapping (IM) process, a six-step methodology for developing interventions, was used by program planners to develop an intervention for health professionals. Each step builds on the others through the execution of a needs assessment and the development of matrices based on performance objectives and determinants of the targeted health behavior. The end product is an ecological, theory-based, and evidence-based intervention. Discussion. The IM process served as a useful protocol for program planners to take an ecological approach and to incorporate theory and evidence into the intervention. Consultation with key informants, the planning group, adopters, implementers, and individuals responsible for institutionalization also contributed to the practicality and feasibility of the intervention. Program planners believe that this intervention fully meets recommendations set forth in the literature. Conclusions. The intervention mapping methodology enabled program planners to develop an intervention that is appropriate and acceptable to the implementer and the recipients.
Abstract:
This dissertation develops and tests a comparative effectiveness methodology utilizing a novel approach to the application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis results in the discrimination of the individual data observations into a relative risk classification by the DEA-PerT methodology. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was subsequently tested for classifying new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance for assessing and implementing treatments and strategies to improve the delivery and performance of health systems.
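The distance-based classification of new entrants can be illustrated with the Mahalanobis variant. This is a hedged sketch, not the Project HeartBeat! implementation: the tier names and data below are hypothetical, and each new observation is simply assigned to the tier whose centroid is nearest in Mahalanobis distance (i.e., accounting for the tier's covariance structure).

```python
import numpy as np

def mahalanobis(x, data):
    """Mahalanobis distance from observation x to the centroid of data
    (rows = observations), using the sample covariance of the tier."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def assign_tier(x, tiers):
    """Assign x to the tier (dict: name -> data matrix) with the
    smallest Mahalanobis distance to its centroid."""
    return min(tiers, key=lambda name: mahalanobis(x, tiers[name]))

# Hypothetical two-tier example: "low" risk observations near the origin,
# "high" risk observations shifted by 10 in each coordinate.
low = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
high = low + 10.0
tier = assign_tier(np.array([0.4, 0.6]), {"low": low, "high": high})
```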
Abstract:
Most studies of differential gene expression have been conducted between two given conditions. The two-condition experimental (TCE) approach is simple in that all genes detected display a common differential expression pattern responsive to a common two-condition difference. As a consequence, genes that are differentially expressed under conditions other than the given two are undetectable with the TCE approach. To address this problem, we propose a new approach called the multiple-condition experiment (MCE) without replication and develop corresponding statistical methods, including inference of pairs of conditions for genes, new t-statistics, and a generalized multiple-testing method applicable to any multiple-testing procedure via a control parameter C. We applied these statistical methods to analyze our real MCE data from breast cancer cell lines and found that 85 percent of gene-expression variation was caused by genotypic effects and genotype-ANAX1 overexpression interactions, which agrees well with our expected results. We also applied our methods to the adenoma dataset of Notterman et al. and identified 93 differentially expressed genes that could not be found with the TCE approach. The MCE approach is a conceptual breakthrough in many respects: (a) many conditions of interest can be studied simultaneously; (b) study of the association between differential expression of genes and conditions becomes easy; (c) it can provide more precise information for molecular classification and diagnosis of tumors; and (d) it can save investigators a great deal of experimental resources and time.
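The generalized multiple-testing procedure with control parameter C is specific to this dissertation and is not reproduced here, but the kind of step-up procedure it generalizes can be sketched with the standard Benjamini-Hochberg FDR rule (the p-values in the example are hypothetical):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Standard Benjamini-Hochberg step-up procedure: returns the sorted
    indices of hypotheses rejected at false discovery rate alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    k = 0
    for rank, i in enumerate(order, start=1):
        # reject the k hypotheses with the smallest p-values, where k is
        # the largest rank satisfying p_(rank) <= alpha * rank / m
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

# Example: of four tests, the two smallest p-values survive at FDR 0.05.
rejected = benjamini_hochberg([0.001, 0.02, 0.04, 0.3])
```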
Abstract:
Statistical methods are developed which assess survival data for two attributes: (1) prolongation of life and (2) quality of life. Health state transition probabilities correspond to prolongation of life and are modeled as a discrete-time semi-Markov process. Embedded within the sojourn time of a particular health state are the quality of life transitions. They reflect events which differentiate perceptions of pain and suffering over a fixed time period. Quality of life transition probabilities are derived from the assumptions of a simple Markov process. These probabilities depend on the health state currently occupied and the next health state to which a transition is made. Using the two types of attributes, the model can estimate the distribution of expected quality-adjusted life years (in addition to the distribution of expected survival times). The expected quality of life can also be estimated within the health state sojourn time, making the assessment of utility preferences more flexible. The methods are demonstrated on a subset of follow-up data from the Beta Blocker Heart Attack Trial (BHAT). This model contains the structure necessary to make inferences when assessing a general survival problem with a two-dimensional outcome.
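The quality-adjusted survival idea can be illustrated with a plain discrete-time Markov cohort calculation. This is a simplified sketch, not the semi-Markov model of the dissertation: the states, transition probabilities, and utility weights below are hypothetical, and the expected quality-adjusted life is the utility-weighted time the cohort spends in each state.

```python
import numpy as np

# Hypothetical 3-state illustration: Well, Ill, Dead (absorbing).
P = np.array([[0.90, 0.08, 0.02],   # one-cycle transition probabilities
              [0.10, 0.75, 0.15],
              [0.00, 0.00, 1.00]])
utility = np.array([1.0, 0.6, 0.0])  # quality weight per cycle in each state

def expected_qaly(P, utility, start=0, cycles=50):
    """Expected quality-adjusted life (in cycles) for a cohort starting
    in state `start`, accumulating state utilities as the occupancy
    distribution evolves under the transition matrix P."""
    dist = np.zeros(len(utility))
    dist[start] = 1.0
    total = 0.0
    for _ in range(cycles):
        total += float(dist @ utility)  # utility earned this cycle
        dist = dist @ P                 # advance the cohort one cycle
    return total

qaly = expected_qaly(P, utility)
```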
Abstract:
An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is established by starting from the Cox proportional hazards model, using Wald statistics. From there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and analysis of a published data set are used to show the performance of the new methods.
Abstract:
A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology are presented. The conditional posterior density of the intraclass correlation coefficient is then derived, and estimation procedures related to this derivation are shown in detail. Three examples of applications of the conditional posterior density to specific data sets are also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient with more traditional estimators. The non-Bayesian methods of estimation used were: analysis of variance and maximum likelihood for balanced data; and MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project is that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful, and practical alternatives to traditional methods of estimation.
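For contrast with the Bayesian estimates, the classical one-way ANOVA estimator for balanced data is only a few lines: the between-group and within-group mean squares combine into the usual ICC formula. A minimal sketch (the data in the example are hypothetical):

```python
import numpy as np

def icc_anova(groups):
    """One-way ANOVA estimator of the intraclass correlation for
    balanced data: `groups` is an (n_groups, k) array of k measurements
    per group. Returns (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    data = np.asarray(groups, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # between-group mean square
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # within-group mean square
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect within-group agreement gives an ICC of 1.
icc = icc_anova([[1, 1], [2, 2], [3, 3]])
```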
Abstract:
The need for timely population data for health planning and indicators of need has increased the demand for population estimates. The data required to produce estimates are difficult to obtain and the process is time consuming. Estimation methods that require less effort and fewer data are needed. The structure preserving estimator (SPREE) is a promising technique not previously used to estimate county population characteristics. This study first uses traditional regression estimation techniques to produce estimates of county population totals. Then the structure preserving estimator, using the results produced in the first phase as constraints, is evaluated. Regression methods are among the most frequently used demographic methods for estimating populations. These methods use symptomatic indicators to predict population change. This research evaluates three regression methods to determine which will produce the best estimates based on the 1970 to 1980 indicators of population change. Strategies for stratifying data to improve the ability of the methods to predict change were tested. Difference-correlation using PMSA strata produced the equation with the best fit to the data. Regression diagnostics were used to evaluate the residuals. The second phase of this study evaluates the use of the structure preserving estimator in making estimates of population characteristics. The SPREE estimation approach uses existing data (the association structure) to establish the relationship between the variable of interest and the associated variable(s) at the county level. Marginals at the state level (the allocation structure) supply the current relationship between the variables. The full allocation structure model uses current estimates of county population totals to limit the magnitude of county estimates. The limited full allocation structure model has no constraints on county size. The 1970 county census age-gender population provides the association structure; the allocation structure is the 1980 state age-gender distribution. The full allocation model produces good estimates of the 1980 county age-gender populations. An unanticipated finding of this research is that the limited full allocation model produces estimates of county population totals that are superior to those produced by the regression methods. The full allocation model is used to produce estimates of 1986 county population characteristics.
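The structure-preserving adjustment can be sketched as iterative proportional fitting: the historical association table supplies the interaction structure, and the current marginals (the allocation structure) are imposed on it by alternately rescaling rows and columns. This is an illustrative sketch, not the dissertation's full model; the tables and marginals below are hypothetical.

```python
import numpy as np

def spree_ipf(assoc, row_totals, col_totals, iters=100):
    """Structure-preserving estimation sketch: rake the historical
    association table `assoc` so that it matches the current row and
    column marginals, preserving the table's interaction structure."""
    X = np.asarray(assoc, dtype=float).copy()
    for _ in range(iters):
        X *= (np.asarray(row_totals) / X.sum(axis=1))[:, None]  # fit rows
        X *= np.asarray(col_totals) / X.sum(axis=0)             # fit columns
    return X

# Hypothetical 2x2 association structure raked to new marginals.
est = spree_ipf([[10.0, 20.0], [30.0, 40.0]], [40.0, 60.0], [50.0, 50.0])
```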
Abstract:
Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), many researchers have searched for, and comparatively studied, more effective methods of measuring human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research. Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated by Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes in the human development of selected countries by the proposed Z-test. Results. The weighted means of all three human development indicators (health, knowledge, and standard of living) increased from 1999 to 2010. The weighted standard deviation for GDP per capita also increased across years, indicating rising inequality in the standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between the MPC and HDI rankings was found among high-development countries, with countries having high GDP per capita shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All the above results were very close to the outcomes in Lai's 1999 report. The Z-test result on the temporal analysis of main principal components from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries, such as Brazil, Russia, India, China, and the U.S.A. Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses national populations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. In conclusion, the index generated using the weighted main principal component has some advantages over the human development index in the UNDP reports.
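The weighted principal component computation reduces to an eigendecomposition of a weighted covariance matrix, with the leading eigenvector defining the index. This is a sketch under assumed inputs: the indicator matrix and population weights below are hypothetical, and in practice the indicators would first be standardized.

```python
import numpy as np

def weighted_pc1(X, w):
    """Scores on the first weighted principal component of an indicator
    matrix X (rows = countries, columns = indicators) with observation
    weights w (e.g. population shares)."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    mu = w @ X                                 # weighted mean of each indicator
    Xc = X - mu
    cov = (Xc * w[:, None]).T @ Xc             # weighted covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    v1 = vecs[:, np.argmax(vals)]              # leading eigenvector
    return Xc @ v1                             # projections onto the first PC

# Hypothetical data: all variation lies along the first indicator, so the
# first PC recovers that axis (up to sign).
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
scores = weighted_pc1(X, [1.0, 1.0, 1.0, 1.0])
```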
Abstract:
Pancreatic cancer is the fourth most common cause of cancer death in the United States, with a five-year survival rate of less than 5% under current treatments, particularly because it is usually detected at a late stage. Identifying a high-risk population in which to launch effective preventive strategies and interventions to control this highly lethal disease is urgently needed. The genetic etiology of pancreatic cancer has not been well profiled. We hypothesized that genetic variants left unidentified by previous genome-wide association studies (GWAS) of pancreatic cancer, due to stringent statistical thresholds or missing interaction analyses, may be unveiled using alternative approaches. To this end, we explored genetic susceptibility to pancreatic cancer in terms of marginal associations of pathways and genes, as well as their interactions with risk factors. We conducted pathway- and gene-based analyses using GWAS data from 3141 pancreatic cancer patients and 3367 controls of European ancestry. Using the gene set ridge regression in association studies (GRASS) method, we analyzed 197 pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. Using the logistic kernel machine (LKM) test, we analyzed 17906 genes defined by the University of California Santa Cruz (UCSC) database. Using the likelihood ratio test (LRT) in a logistic regression model, we analyzed 177 pathways and 17906 genes for interactions with risk factors in 2028 pancreatic cancer patients and 2109 controls of European ancestry.
After adjusting for multiple comparisons, six pathways were marginally associated with risk of pancreatic cancer (P < 0.00025): Fc epsilon RI signaling, maturity onset diabetes of the young, neuroactive ligand-receptor interaction, and long-term depression (Ps < 0.0002), and the olfactory transduction and vascular smooth muscle contraction pathways (P = 0.0002). Nine genes were marginally associated with pancreatic cancer risk (P < 2.62 × 10^-5), including five reported genes (ABO, HNF1A, CLPTM1L, SHH and MYC) as well as four novel genes (OR13C4, OR13C3, KCNA6 and HNF4G). Three pathways significantly interacted with risk factors in modifying the risk of pancreatic cancer (P < 2.82 × 10^-4): the chemokine signaling pathway with obesity (P < 1.43 × 10^-4), and the calcium signaling pathway (P < 2.27 × 10^-4) and MAPK signaling pathway (P < 2.77 × 10^-4) with diabetes. However, none of the 17906 genes tested for interactions survived the multiple comparison corrections. In summary, our GWAS study unveiled previously unidentified genetic susceptibility to pancreatic cancer using alternative methods. These novel findings provide new perspectives on genetic susceptibility to, and molecular mechanisms of, pancreatic cancer and, once confirmed, will shed light on the prevention and treatment of this disease.
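The likelihood-ratio test for a gene-environment interaction in a logistic model can be sketched directly. This toy version fits both nested models by Newton-Raphson and is far simpler than the GRASS/LKM machinery used in the study; all data in the example are hypothetical.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson; returns the coefficient
    vector and the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])              # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
    return beta, ll

def interaction_lrt(g, e, y):
    """Likelihood-ratio test for a gene-environment interaction:
    compares logistic models with and without the g*e product term.
    The statistic is chi-square with 1 df under the null."""
    n = len(y)
    base = np.column_stack([np.ones(n), g, e])
    full = np.column_stack([base, g * e])
    _, ll0 = fit_logistic(base, y)
    _, ll1 = fit_logistic(full, y)
    return 2.0 * (ll1 - ll0)
```

A usage sketch on hypothetical genotype counts (g in 0/1/2), a binary exposure e, and case status y would call `interaction_lrt(g, e, y)` and compare the statistic against a chi-square(1) critical value.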
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the variation in the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving the Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, originally developed for gene-environment interaction studies, to other related types of studies, such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene by gene interactions (epistasis), and gene by environment interactions in the same model. It is well known that, in many practical situations, there exists a natural hierarchical structure between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interactions to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model that does not impose the hierarchical constraint, and we observe the superior performance of the hierarchical models in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using the model for detecting the true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power to detect the non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (the Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods to handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in the way they balance statistical efficiency and bias within a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.