7 results for WSN low-power networking
in DigitalCommons@The Texas Medical Center
Abstract:
The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic combinations include reduced toxicity and minimized or delayed drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, these analyses are often done poorly. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The method most commonly used in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, and Newman, 2008; Hennessey, Rosner, Bast, and Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for the case in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments.
Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest accounting of the uncertainty in drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared with treatment by each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either histone deacetylation inhibitor, suberoylanilide hydroxamic acid or trichostatin A, in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for inhibiting the growth of ovarian cancer cells.
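For context, the conventional Median-Effect Principle/Combination Index workflow that this work critiques can be sketched in a few lines. The sketch below is a minimal, illustrative implementation on synthetic data; the function names and the least-squares fit are assumptions for illustration, and it deliberately omits the variance modeling that the proposed Bayesian framework adds.

```python
import numpy as np

def fit_median_effect(doses, fa):
    """Fit the median-effect model log(fa/(1-fa)) = m*(log D - log Dm)
    by ordinary least squares; returns the slope m and the
    median-effect dose Dm."""
    x = np.log(doses)
    y = np.log(fa / (1 - fa))
    m, b = np.polyfit(x, y, 1)          # y = m*x + b, with b = -m*log(Dm)
    return m, np.exp(-b / m)

def dose_for_effect(fa, m, Dm):
    """Dose of a single agent needed alone to reach effect fraction fa."""
    return Dm * (fa / (1 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, fit1, fit2):
    """Chou-Talalay combination index at combined effect fa:
    CI = d1/Dx1 + d2/Dx2; CI < 1 is read as synergy, CI > 1 as
    antagonism, under Loewe additivity."""
    return d1 / dose_for_effect(fa, *fit1) + d2 / dose_for_effect(fa, *fit2)
```

Note how the fit treats every transformed point as equally reliable and has no way to borrow strength across repeated experiments; those are exactly the sources of variation the hierarchical Bayesian model accounts for.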
Abstract:
Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through successive generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy, and Fragile X Syndrome, was shown to be caused by the expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g., Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia), and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches that model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model. This approach models anticipation through a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies under different data-generating conditions to assess the ability of the algorithms to detect genetic anticipation.
Results from analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. Application of the first method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence of a generation effect in both cases.
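The core of the first idea, a discrete-time logistic hazard with a generational covariate, can be sketched in miniature. The data layout, function names, and Newton-Raphson fit below are illustrative assumptions rather than the dissertation's regressive logistic hazard implementation; in this toy model a positive coefficient on the generation covariate corresponds to anticipation (higher onset hazard at a given age in later generations).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def person_period(ages, events, gens):
    """Expand (current age, affected?, generation) records into
    discrete-time person-period rows (t, generation, onset-at-t)."""
    rows = []
    for age, event, g in zip(ages, events, gens):
        for t in range(1, age + 1):
            rows.append((t, g, 1 if (event and t == age) else 0))
    return np.array(rows, dtype=float)

def fit_logistic_hazard(rows, iters=25):
    """Newton-Raphson fit of h(t, g) = sigmoid(b0 + b1*t + b2*g).
    A positive b2 (generation covariate) signals anticipation."""
    X = np.column_stack([np.ones(len(rows)), rows[:, 0], rows[:, 1]])
    y = rows[:, 2]
    beta = np.zeros(3)
    for _ in range(iters):
        p = sigmoid(X @ beta)
        W = np.maximum(p * (1 - p), 1e-9)
        H = X.T @ (X * W[:, None]) + 1e-6 * np.eye(3)   # tiny ridge for stability
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta
```

A full family-based likelihood would additionally model gene transmission and familial correlation, which is what distinguishes the dissertation's approach from a naive covariate test like this one.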
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions across the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting the association of common variants but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, with stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus approach to collectively analyzing genome regions. In this project, functional principal component (FPC) methods, coupled with high-dimensional data reduction techniques, were used to develop novel and powerful methods for testing the association of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics methods suffer from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with a scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their application, the functional linear models were applied to five quantitative traits in the Framingham Heart Study.
This project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, leading to the discovery of gene networks significantly associated with psoriasis.
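As a rough illustration of the FPC idea, genotype profiles over a region can be summarized by a few principal component scores and the trait regressed on them collectively, rather than testing locus by locus. The sketch below is a simplified stand-in (SVD-based PCA rather than a true functional basis expansion, and a raw model sum of squares rather than a calibrated test statistic); all names are hypothetical.

```python
import numpy as np

def fpc_scores(G, k=3):
    """G: n x p genotype matrix (0/1/2 minor-allele counts), columns
    ordered by genomic position. Center and take the SVD; the leading
    right singular vectors play the role of principal component
    functions, and U * s gives per-individual scores."""
    Gc = G - G.mean(axis=0)
    U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
    return U[:, :k] * s[:k]

def region_stat(G, y, k=3):
    """Toy region-level statistic: the model sum of squares from
    regressing the centered trait on the top-k FPC scores (larger
    values suggest stronger region-trait association)."""
    S = np.column_stack([np.ones(len(y)), fpc_scores(G, k)])
    beta, *_ = np.linalg.lstsq(S, y - y.mean(), rcond=None)
    fitted = S @ beta
    return float(fitted @ fitted)
```

The key design point the abstract argues for survives even in this sketch: the region, not the single variant, is the unit of analysis, so rare variants contribute through the shared component scores.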
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit. The study found that the Cg* statistic was adequate in tests of significance for most situations. However, when testing data that deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping the estimated probabilities into from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because it does not follow a χ² distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns. The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests of the influence of individual observations.
Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates. Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method splits observations with identical probabilities into different quantiles. Approaches that create equal-size groups by separating ties should be avoided.
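A minimal version of the statistic under study, using the deciles-of-risk grouping, can be sketched as follows. This is an illustrative implementation, not the study's simulation code; note that the simple equal-size index split below can separate tied probabilities across groups, which is exactly the practice the study advises against.

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic with 'deciles of risk':
    sort by fitted probability, cut into `groups` equal-size groups,
    and sum (observed - expected)^2 / (expected * (1 - mean p)) over
    the groups. Usually compared to a chi-square reference (not valid
    for purely categorical covariates with few covariate patterns)."""
    order = np.argsort(p)
    y = np.asarray(y, dtype=float)[order]
    p = np.asarray(p, dtype=float)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        o = y[idx].sum()               # observed events in the group
        e = p[idx].sum()               # expected events in the group
        n = len(idx)
        stat += (o - e) ** 2 / (e * (1 - e / n))
    return stat
```

With `groups=10` this reproduces the deciles-of-risk grouping the study found generally sufficient; a well-calibrated model yields a small value, while a large value flags lack of fit.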
Abstract:
Objective. The purpose of the study is to provide a holistic depiction of the behavioral and environmental factors contributing to risky sexual behaviors among predominantly high school educated, low-income African Americans residing in urban areas of Houston, TX, utilizing the Theory of Gender and Power, Situational/Environmental Variables Theory, and Sexual Script Theory. Methods. A cross-sectional study was conducted via questionnaires among 215 Houston-area residents, of whom 149 were women and 66 were men. Measures used to assess behaviors of the population included a history of homelessness, use of crack/cocaine among several other illicit drugs, the type of sexual partner, age of participant, age of the most recent sex partner, whether participants had sought health care in the last 12 months, knowledge of a partner's other sexual activities, symptoms of depression, and places where partners were met. To determine the risk of sexual encounters, a risk index employing the variables used to assess condom use was created, categorizing sexual encounters as unsafe or safe. Results. Variables meeting the significance level of p < .15 in the bivariate analysis for each theory were entered into a binary logistic regression analysis. The block for each theory was significant, suggesting that the grouping assignments of the variables by theory were significantly associated with unsafe sexual behaviors. Within the regression analysis, variables such as sex for drugs/money, low income, and crack use demonstrated an absolute effect size of at least 1, indicating that these variables had a significant effect on unsafe sexual behavioral practices. Conclusions. Variables assessing behavior and environment demonstrated a significant effect when categorized by their relation to the designated theories.
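The bivariate screening step described in the Results can be illustrated with a short sketch: binary predictors are retained when their 2x2 chi-square statistic against the unsafe/safe outcome exceeds the 1-df critical value for p < .15 (about 2.072), producing the candidate set for the block logistic regression. The function names and data layout below are hypothetical.

```python
import numpy as np

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return 0.0 if den == 0 else n * (a * d - b * c) ** 2 / den

def screen_predictors(X, y, crit=2.072):
    """Bivariate screen: keep indices of binary predictors whose 2x2
    association with the binary unsafe/safe outcome y exceeds the
    chi-square critical value for p < .15 with 1 df (~2.072)."""
    keep = []
    for j in range(X.shape[1]):
        a = int(np.sum((X[:, j] == 1) & (y == 1)))
        b = int(np.sum((X[:, j] == 1) & (y == 0)))
        c = int(np.sum((X[:, j] == 0) & (y == 1)))
        d = int(np.sum((X[:, j] == 0) & (y == 0)))
        if chi2_2x2(a, b, c, d) > crit:
            keep.append(j)
    return keep
```

The liberal p < .15 threshold deliberately over-includes candidates at the screening stage, leaving the multivariable regression to sort out which survive adjustment.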
Abstract:
The Internet, and specifically web 2.0 social media applications, offers an innovative method for communicating child health information to low-income parents. The main objective of this study was to use qualitative data to determine the value of using social media to reach low-income parents with child health information. A qualitative formative evaluation employing focus groups was used. Inclusion criteria were: (1) being a parent of a child attending a school in a designated Central Texas school district; and (2) speaking English. The students who attend these schools are generally economically disadvantaged and predominately Hispanic. The classic analysis strategy was used for data analysis. Focus group participants (n=19) were female (95%); White (53%), Hispanic (42%), or African American (5%); and received government assistance (63%). Most had access to the Internet (74%) and were likely to have low health literacy (53%). The most preferred source of child health information was the family pediatrician or general practitioner. Many participants were familiar with social media applications and had profiles on popular social networking sites but used them infrequently. Objections to social media sites as sources of child health information included lack of credibility and the demands on parents' time. Social media has excellent potential for reaching low-income parents when used as part of a multi-channel communication campaign. Further research should focus on the most effective types and formats of messages, such as story-telling, for promoting behavior change in this population.