8 results for longest monotone subsequence

in DigitalCommons@The Texas Medical Center


Relevance:

10.00%

Publisher:

Abstract:

The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduced toxicity and minimized or delayed drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, these analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). The Median-Effect Principle/Combination Index method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for the case in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments. Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest account of the uncertainty in drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared with treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either of the histone deacetylation inhibitors suberoylanilide hydroxamic acid and trichostatin A in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for inhibiting the growth of ovarian cancer cells.
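For context on the Median-Effect Principle/Combination Index method that the abstract critiques, a minimal sketch of the classical calculation is shown below, with made-up parameter values; it illustrates the conventional approach (Chou and Talalay, 1984), not the Bayesian methodology proposed in the dissertation.

```python
# Classical Combination Index (CI) under the median-effect equation
# fa/(1 - fa) = (D/Dm)**m; all parameter values below are invented.

def median_effect_dose(fa, Dm, m):
    """Dose of a single agent needed to reach fraction affected `fa`."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, Dm1, m1, Dm2, m2):
    """CI at effect level `fa` for doses d1, d2 given in combination;
    CI < 1 is read as synergy, CI > 1 as antagonism."""
    Dx1 = median_effect_dose(fa, Dm1, m1)  # dose of drug 1 alone for effect fa
    Dx2 = median_effect_dose(fa, Dm2, m2)  # dose of drug 2 alone for effect fa
    return d1 / Dx1 + d2 / Dx2

# Example with hypothetical median-effect doses (Dm) and slopes (m).
print(combination_index(fa=0.5, d1=0.3, d2=0.2, Dm1=1.0, m1=1.2, Dm2=0.8, m2=0.9))
```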

Relevance:

10.00%

Publisher:

Abstract:

Glucagon is a 29-amino acid polypeptide hormone produced in the α cells of the pancreatic islets. The purpose of this research was to better understand the role of glucagon in the regulation of metabolic processes. As with other polypeptide hormones, the synthesis of glucagon is thought to involve a larger precursor, which is then enzymatically cleaved to the functional form. The specific research objectives were to obtain cloned copies of the messenger RNA (mRNA) for pancreatic glucagon, to determine their primary sequences, and from this coding information to deduce the amino acid sequence of the initial glucagon precursor. From this suggested preproglucagon sequence and prior information on possible proglucagon intermediate processing products, the overall objective of this research is to propose a possible pathway for the biosynthesis of pancreatic glucagon. Synthetic oligodeoxynucleotide probes of 14 nucleotides (a 14-mer) and 17 nucleotides (a 17-mer) complementary to codons specifying a unique sequence of mature glucagon were synthesized. The ³²P-labeled 14-mer was hybridized with size-fractionated fetal bovine pancreatic poly(A⁺) RNA bound to nitrocellulose. RNA fractions of ~14S were found to hybridize specifically, resulting in an ~10-fold enrichment for these sequences. These poly(A⁺) RNAs were translated in a cell-free system and the products analyzed by gel electrophoresis. The translation products were found to be enriched for a protein of the putative size of mammalian preproglucagon (~21 kDa). These enriched RNA fractions were used to construct a complementary DNA (cDNA) library in plasmid pBR322. Screening of duplicate colony filters with the ³²P-labeled 17-mer and a ³²P-labeled 17-mer-primed cDNA probe indicated 25 possible glucagon clones from 3100 colonies screened. Restriction mapping of 6 of these clones suggested that they represented a single mRNA species. Primary sequence analysis of one clone containing a 1200 base pair DNA insert revealed that it contained essentially a full-length copy of glucagon cDNA. Analysis of the cDNA suggested that it encoded an initial translation product of 180 amino acids with Mr = 21 kDa. The first initiation codon (ATG, methionine), followed by the longest open reading frame of 540 nucleotides, was preceded by a 5'-untranslated region of 90 nucleotides and followed by a longer 3'-untranslated region of 471 nucleotides, resulting in a total of 1101 nucleotides. . . . (Author's abstract exceeds stipulated maximum length. Discontinued here with permission of author.)
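Since the analysis turns on locating the first ATG that opens the longest reading frame of the cDNA, here is a small illustrative sketch (not taken from the dissertation) of scanning a sequence for its longest ATG-initiated open reading frame:

```python
# Toy scan for the longest ATG-initiated open reading frame (forward strand only).
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf(seq):
    """Return (start, end) of the longest ATG-initiated ORF in `seq`
    (0-based, end-exclusive, stop codon included)."""
    seq = seq.upper()
    best = (0, 0)
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                      # open a candidate ORF
            elif start is not None and codon in STOP_CODONS:
                if i + 3 - start > best[1] - best[0]:
                    best = (start, i + 3)      # longest ORF seen so far
                start = None
    return best

s, e = longest_orf("CCATGAAATTTGGGTAGCC")
print(s, e, e - s)   # 2 17 15 -> ATG AAA TTT GGG TAG
```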

Relevance:

10.00%

Publisher:

Abstract:

Problem/purpose. The specific aim of this focused ethnography was to provide insight into the experience of aging of the American Indian (AI) elder as demonstrated by one tribe, the Zuni of New Mexico. Discovering how Zuni elders construct the experience of aging and its associated behaviors allowed the researcher to deconstruct aging and then re-present it in a cogent description for this population. Such a description is lacking in the literature and will be useful in planning culturally relevant eldercare services. Methods. Ethnographic field techniques were used to sample from elders, pueblo members-at-large, activities, events, and places. Over 1,800 hours were spent in the field over 14 months and five site visits, the longest lasting almost 4 weeks. During analysis, codes were developed for transcribed interviews, field notes, supplementary documents, photographs, videos, and artifacts. Categories, and ultimately a cognitive map and model, were developed that represented aging in Zuni Pueblo in 2000. Findings. Zuni elders are aging in two worlds. Their primary world has been described as a sevenfold universe, a complicated structure with seven planes wherein the middle plane refers to themselves, a synthesis of all the other planes. The increasing influence of the white world has formed a ‘new middle’ out of which everyday aspects of aging are viewed. Implications for nursing/gerontology. Nurses and others in gerontology must recognize that vast differences in worldviews exist between themselves and AI elders regarding health practices, spirituality, eating patterns, family roles, medicine, religion, and countless other aspects of life. These differences are driven by centuries-old beliefs and practices, coupled with a collision with the white world. Making a paradigm shift and using an appropriate lens with which to view these differences can only increase our understanding and efficacy in delivering culturally relevant care.

Relevance:

10.00%

Publisher:

Abstract:

Existing literature examining the association between occupation and asthma has not been adequately powered to address this question in the food preparation or food service industries. Few studies have addressed the possible link between occupational exposure to cooking fumes and asthma. This secondary analysis of cohort study data aimed to investigate the association between adult-onset asthma and exposure to (a) cooking fumes at work or (b) longest-held employment in food preparation or food service (e.g., waiters and waitresses, food preparation workers, and non-restaurant food servers). Participants were drawn from a cohort of Mexican-American women residing in Houston, TX, recruited between July 2001 and June 2007. This analysis used Cox proportional-hazards regression to estimate the hazard ratio of adult-onset asthma given the exposures of interest, adjusting for age, BMI, smoking status, acculturation, and birthplace. We found a strong association between adult-onset asthma and occupational exposure to cooking fumes (hazard ratio [HR] = 1.77; 95% confidence interval [CI], 1.15, 2.72), especially in participants whose longest-held occupation was not in the food-related industry (HR = 2.12; 95% CI, 1.21, 3.60). In conclusion, adult-onset asthma is a serious public health concern for food industry workers.
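A minimal sketch of the kind of Cox proportional-hazards analysis described above, using the lifelines package; the file name and column names are hypothetical, and the covariates simply mirror those listed in the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis file: one row per cohort participant.
df = pd.read_csv("cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "asthma_onset", "cooking_fumes",
        "age", "bmi", "smoker", "acculturation", "us_born"]],
    duration_col="time_to_event",   # follow-up time
    event_col="asthma_onset",       # 1 = adult-onset asthma, 0 = censored
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```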

Relevance:

10.00%

Publisher:

Abstract:

In the epidemiology literature, one often needs to investigate relationships between means when the levels of the experiment are actually monotone sets forming a partition of the range of sampling values. In practice, the analysis of these group means is generally performed using classical analysis of variance (ANOVA); however, this method has never been challenged. In this dissertation, we formulate and present an examination of its validity. First, the classical assumptions of normality and constant variance are not always true. Second, under the null hypothesis of equal means, the test statistic for the classical ANOVA technique is still valid. Third, when the hypothesis of equal means is rejected, the classical analysis techniques for hypotheses of contrasts are not valid. Fourth, under the alternative hypothesis, we can show that the monotone property of the levels leads to the conclusion that the means are monotone. Fifth, we propose an appropriate method for handling the data in this situation.
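To make the setting concrete, here is a short illustrative sketch with simulated data (not from the dissertation) of the classical one-way ANOVA applied to group means defined by monotone intervals partitioning the sampling range, i.e., the conventional analysis whose validity the dissertation examines:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=300)             # sampled exposure values
y = 0.5 * x + rng.normal(scale=2, size=300)  # response with a monotone trend

# Partition the range of x into monotone (ordered, non-overlapping) intervals.
edges = [0, 2.5, 5, 7.5, 10]
groups = [y[(x >= lo) & (x < hi)] for lo, hi in zip(edges[:-1], edges[1:])]

f_stat, p_value = stats.f_oneway(*groups)    # classical ANOVA test of equal means
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
print("group means:", [round(g.mean(), 2) for g in groups])  # monotone in this setup
```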

Relevance:

10.00%

Publisher:

Abstract:

Background. Houston, Texas, once obtained all its drinking water from underground sources. However, in 1954, the city began supplementing its water with the surface source Lake Houston. This created differences in exposure to disinfection byproducts (DBPs) in different parts of Houston. Trihalomethanes (THMs) are the most common DBPs and are useful indicators of DBPs in treated drinking water. This study examines the relationship between THMs in chlorinated drinking water and the incidence of bladder cancer in Houston. Methods. Individual bladder cancer deaths from 1975 to 2004 were assigned to four surface water exposure areas in Houston using census tracts: area A used groundwater the longest, area B used treated lake water the longest, area C used treated lake water the second longest, and area D used a combination of groundwater and treated lake water. Within each surface water exposure area, mortality rates were calculated in 5-year intervals for four race-gender categories. Linear regression models were fitted to the bladder cancer mortality rates over the entire period of available data (1990–2004). Results. A decrease in bladder cancer mortality was observed among white males in area B (p = 0.030), white females in area A (p = 0.008), non-white males in area D (p = 0.003), and non-white females in areas A and B (p = 0.002 and p = 0.001). Bladder cancer mortality differed by race-gender category and time (both p ≤ 0.001), but not by surface water exposure area (p = 0.876). Conclusion. The relationship between bladder cancer mortality and the four surface water exposure areas (signifying THM exposure) was not statistically significant. This result could be attributable to Houston's control of THMs, beginning in the early 1980s, through the use of chloramine as a secondary disinfectant in the drinking water treatment process.
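A small sketch of the trend fitting described in the Methods, using scipy; the interval midpoints and rate values below are invented for illustration and stand in for one race-gender category in one exposure area.

```python
import numpy as np
from scipy import stats

# Hypothetical bladder cancer mortality rates (per 100,000) by 5-year
# interval midpoint for one race-gender category in one exposure area.
years = np.array([1992.5, 1997.5, 2002.5])
rates = np.array([7.1, 6.4, 5.8])

fit = stats.linregress(years, rates)          # linear trend in the rate over time
print(f"slope = {fit.slope:.3f} per year, p = {fit.pvalue:.3f}")
```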

Relevance:

10.00%

Publisher:

Abstract:

The relationship between occupational exposures and glioma has not been adequately assessed, owing to the lack of studies in the current scientific literature. To address this gap, the Harris County Brain Tumor Study, an ongoing population-based case-control study, began in January 2001. The longest-held occupations of 382 cases and 629 controls, frequency matched on age (within 5 years), sex, and race, were placed into 14 predetermined occupational categories. Adjusted odds ratios and 95% confidence intervals were calculated for each category using multiple logistic regression. Potential confounders assessed included sex, age, smoking status, education, and income. For all subjects, significantly elevated adjusted odds ratios were found in the health-related (aOR = 1.66; 95% CI = 1.03, 2.68), teaching (aOR = 1.84; 95% CI = 1.17, 2.88), and protective service (aOR = 3.60; 95% CI = 1.05, 12.31) occupational categories after controlling for sex and education. A significantly lowered odds ratio was seen in the writers, artists, and entertainers category (aOR = 0.14; 95% CI = 0.03, 0.58). In the stratified analyses, which controlled for education, males had a significantly elevated odds ratio for protective service workers (aOR = 4.83; 95% CI = 1.24, 18.83), while a significantly lower odds ratio was found for mechanics and machine operators (aOR = 0.33; 95% CI = 0.12, 0.87). In females, we observed a significantly elevated odds ratio for teachers (aOR = 1.99; 95% CI = 1.20, 3.31) and a significantly lower odds ratio for clerical workers (aOR = 0.63; 95% CI = 0.45, 0.90). These analyses revealed several significant associations and allowed for separate analyses by gender, distinguishing this study from many glioma studies. Further analyses should provide a large enough sample size to stratify by gender as well as histological subtype.
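A minimal sketch of the adjusted odds ratio calculation described above (logistic regression of case status on occupational category, controlling for sex and education); the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("glioma_case_control.csv")  # hypothetical analysis file

# case = 1 for glioma cases, 0 for controls; occupation is the longest-held
# occupational category with a chosen reference level.
model = smf.logit("case ~ C(occupation) + C(sex) + C(education)", data=df).fit()

or_table = pd.DataFrame({
    "aOR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
})
print(or_table)  # adjusted odds ratios with 95% confidence intervals
```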

Relevance:

10.00%

Publisher:

Abstract:

Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact that these methodologies have on the data analysis. Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model with three methodologies for handling missing data: (a) complete case analysis; (b) multiple imputation with an explicit model containing the outcome variables, time, and the time-by-treatment interaction; and (c) multiple imputation with an explicit model containing the outcome variables, time, the time-by-treatment interaction, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO. Results: We first evaluated the patterns of missingness in this study, which indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then evaluated the assumption of missing completely at random with Little's missing completely at random (MCAR) test; the chi-square statistic was 167.8 with 125 degrees of freedom (p = 0.006), indicating that the data could not be assumed to be missing completely at random. We then compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only multiple imputation with additional covariates (the uncongenial versus congenial imputation models) reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values reached different results. Discussion: The study indicated, first, that missingness was crucial in this study. Second, understanding the assumptions of the model was important, since we could not determine whether the data were missing at random or missing not at random. Therefore, future research should focus on exploring further sensitivity analyses under the missing-not-at-random assumption.
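A rough sketch of a multiple imputation plus mixed linear model workflow in the spirit of strategies (b) and (c), with estimates pooled by Rubin's rules; the file, column names, and formula are hypothetical (all columns assumed numeric), and this is not the study's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICEData

df = pd.read_csv("intervention_long.csv")     # hypothetical long-format data
formula = "outcome ~ time * treatment"        # time-by-treatment interaction
m = 20                                        # number of imputed data sets

params, variances = [], []
imp = MICEData(df)                            # chained-equations imputation
for _ in range(m):
    imp.update_all()                          # refresh all imputed values
    fit = sm.MixedLM.from_formula(formula, groups="subject_id",
                                  data=imp.data).fit()
    params.append(fit.params)
    variances.append(fit.bse ** 2)

params, variances = pd.DataFrame(params), pd.DataFrame(variances)
pooled_est = params.mean()                                       # Rubin's rules
pooled_se = np.sqrt(variances.mean() + (1 + 1 / m) * params.var())
print(pd.DataFrame({"estimate": pooled_est, "se": pooled_se}))
```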