7 results for low drug dose
in DigitalCommons@The Texas Medical Center
Abstract:
Shigellosis is a communicable disease harbored primarily by humans. The low infective dose, the lack of an available vaccine, and the mild or asymptomatic nature of the disease have prevented eradication of Shigella in the United States. In addition, the inadequate water and sewage infrastructure that normally contributes to the spread of disease in developing countries is, for the most part, a non-issue in the U.S., making surveillance and risk factor identification the principal prevention and control measures used to reduce the incidence of Shigellosis.

The purpose of this study was to describe the Shigellosis disease burden in the Hidalgo County, Texas population during the 2005-2009 study period and to compare these findings with available national data. The identification and publication of a health disparity, in the form of increased Shigellosis rates among Hidalgo County residents compared to national rates, especially age-specific rates, are intended to generate public health attention and action to address this issue.

There were 1,007 confirmed Shigellosis cases reported in Hidalgo County, Texas. An overwhelming majority (79%) of the Shigellosis cases during this time frame occurred in children less than ten years of age. From age 10 through age 39, females constituted the majority of cases. Age-specific rates for children four years of age and younger were compared to national rates; the Hidalgo County rates were higher, at 9.2 and 1.8 cases for every one case reported nationally in 2005 and 2006, respectively. The total crude rates of Shigellosis were also higher than the rates available from the Foodborne Diseases Active Surveillance Network (FoodNet) of CDC's Emerging Infections Program from 2005-2009. Thus, compared to the FoodNet surveillance rates, Hidalgo County experienced above-average rates of Shigellosis throughout the study period, with the majority of cases identified in young children under the age of ten.

The information gathered in this analysis could be used to implement and monitor infection control measures, such as hand-washing education at facilities that tend to the groups identified as being at higher risk of infection. In addition, the higher burden of disease found in Hidalgo County requires further study to determine whether factors associated with an increased risk of Shigellosis exist in this community and in other communities along the U.S.-Mexico border.
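For readers unfamiliar with the rate comparison used above, a minimal Python sketch of an age-specific rate-ratio calculation follows; the case counts and population denominators are invented for illustration and are not the study's figures:

```python
# Hypothetical age-specific rate-ratio calculation; the counts and
# populations below are made up and do not come from the study.

def incidence_rate(cases, population, per=100_000):
    """Incidence rate per `per` persons."""
    return cases / population * per

# Children four years of age and younger (invented numbers)
county_rate = incidence_rate(cases=147, population=80_000)          # Hidalgo County
national_rate = incidence_rate(cases=3_000, population=15_000_000)  # national

# A ratio of 9.2 would correspond to the 2005 comparison reported above
rate_ratio = county_rate / national_rate
print(f"county {county_rate:.1f} vs national {national_rate:.1f} per 100,000; "
      f"ratio {rate_ratio:.1f}")
```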
Abstract:
The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic combinations include reduced toxicity and minimized or delayed drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, such analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis.

The most commonly used method in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need to improve the current methodology.

We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for cases in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments. Third, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method gives a comprehensive and honest accounting of the uncertainty in drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening of effective/synergistic agents and reduces the incidence of type I error.

We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. Applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with a histone deacetylation inhibitor (suberoylanilide hydroxamic acid or trichostatin A) in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for cell growth inhibition of ovarian cancer cells.
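For context, the conventional Chou-Talalay calculation that this dissertation critiques can be sketched in a few lines of Python; the median-effect parameters (Dm, m) and combination doses below are hypothetical, not values from the study:

```python
# Minimal sketch of the Median-Effect/Combination Index (CI) calculation
# (Chou and Talalay, 1984); all parameter values here are hypothetical.

def dose_for_effect(fa, Dm, m):
    """Invert the median-effect equation fa/(1-fa) = (D/Dm)^m to get the
    single-agent dose D that produces effect fraction fa."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent median-effect parameters (dose units arbitrary)
Dm1, m1 = 2.0, 1.2   # drug 1: median-effect dose and slope
Dm2, m2 = 5.0, 0.9   # drug 2

# Doses of each drug in the combination and the effect fraction observed
d1, d2, fa = 0.6, 1.5, 0.5

# CI = d1/Dx1 + d2/Dx2: CI < 1 suggests synergy, CI = 1 additivity,
# CI > 1 antagonism
CI = d1 / dose_for_effect(fa, Dm1, m1) + d2 / dose_for_effect(fa, Dm2, m2)
print(f"Combination Index at fa = {fa}: {CI:.2f}")
```

As the abstract notes, a single CI point estimate of this kind ignores variation across replicate experiments; the proposed Bayesian framework instead models that variation and reports the uncertainty of the interaction assessment.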
Abstract:
5-aza-2'-deoxycytidine (DAC) is a cytidine analogue that strongly inhibits DNA methylation and was recently approved for the treatment of myelodysplastic syndromes (MDS). To maximize clinical results with DAC, we investigated its use as an anti-cancer drug. We also investigated mechanisms of resistance to DAC, in vitro in cancer cell lines and in vivo in MDS patients after relapse.

We found that DAC sensitized cells to the effect of 1-β-D-arabinofuranosylcytosine (Ara-C). The combination of DAC and Ara-C, or Ara-C following DAC, showed additive or synergistic effects on cell death in four human leukemia cell lines in vitro, but antagonism in terms of global methylation, RIL gene activation, and H3 lys-9 acetylation of short interspersed elements (Alu). One possible explanation is that hypomethylated cells are sensitized to cell killing by Ara-C.

Turning to resistance, we found that the IC50 of DAC differed 1000-fold among the cell lines and was correlated with the dose of DAC that induced peak hypomethylation of long interspersed nuclear elements (LINE) (r=0.94, P<0.001), but not with LINE methylation at baseline (r=0.05, P=0.97). Sensitivity to DAC did not significantly correlate with sensitivity to another hypomethylating agent, 5-azacytidine (AZA) (r=0.44, P=0.11). The cell lines most resistant to DAC had low levels of dCK and of the hENT1 and hENT2 transporters, and high levels of cytosine deaminase (CDA). In an HL60 leukemia cell line, resistance to DAC could be rapidly induced by drug exposure and was related to a switch from monoallelic to biallelic mutation of dCK or a loss of the wild-type DCK allele. Furthermore, we showed that DAC induced DNA breaks, evidenced by histone H2AX phosphorylation, and increased homologous recombination rates 7- to 10-fold. Finally, we found no dCK mutations in MDS patients after relapse. Cytogenetics showed that three of the patients acquired new abnormalities at relapse.

These data suggest that in vitro spontaneous and acquired resistance to DAC can be explained by insufficient incorporation of the drug into DNA. In vivo resistance to DAC is likely due to methylation-independent pathways such as chromosome changes. The lack of cross-resistance between DAC and AZA is of potential clinical relevance, as is the combination of DAC and Ara-C.
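The IC50-versus-hypomethylation correlation reported above can be illustrated with a short sketch; the values below are invented and are not the study's measurements:

```python
# Hypothetical illustration of correlating DAC IC50 with the dose giving
# peak LINE hypomethylation; these numbers are invented, not study data.
import numpy as np
from scipy import stats

ic50 = np.array([0.01, 0.05, 0.3, 1.0, 4.0, 10.0])                # spans ~1000-fold
peak_hypometh_dose = np.array([0.02, 0.08, 0.5, 1.5, 5.0, 12.0])  # same units

# Log-transform first, since both variables span orders of magnitude
r, p = stats.pearsonr(np.log10(ic50), np.log10(peak_hypometh_dose))
print(f"r = {r:.2f}, P = {p:.3g}")
```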
Abstract:
Despite the availability of a hepatitis B vaccine for over two decades, drug users and other high-risk adult populations have experienced low vaccine coverage. Poor compliance has limited efforts to reduce transmission of hepatitis B infection in this population. Evidence suggests that the immunological response in drug users is impaired compared to the general population, in terms of both lower seroprotection rates and lower antibody levels.

The current study investigated the effectiveness of the multi-dose hepatitis B vaccine and compared the effect of the standard and accelerated vaccine schedules in a not-in-treatment, drug-using adult population in the city of Houston, USA.

A population of drug users from two Houston communities, susceptible to hepatitis B, was sampled through outreach workers and referral methodology. Subjects were randomized either to the standard hepatitis B vaccine schedule (0, 1, and 6 months) or to an accelerated schedule (0, 1, and 2 months). Antibody levels were measured by laboratory analyses at various time points. Participants were followed for two years, and seroconversion rates were calculated to determine immune response.

A four-percentage-point difference in the overall compliance rate was observed between the standard (73%) and accelerated (77%) schedules. Logistic regression analyses showed that drug users living on the streets were twice as likely not to complete all three vaccine doses (p=0.028), and current speedball use was also associated with non-completion (p=0.002). In the multivariate analysis, completion of all three vaccinations was also correlated with older age. Drug users on the accelerated schedule were 26% more likely to achieve completion, although this effect was only marginally significant (p=0.085).

A cumulative adequate protective response was achieved by 65% of the HBV-susceptible subgroup by 12 months and was identical for the standard and accelerated schedules. An excess protective response (>=100 mIU/mL) occurred more frequently at the later time point for the standard schedule (36% at 12 months compared to 14% at six months), while the greater proportion of excess protective responses for the accelerated schedule occurred earlier (34% at six months compared to 18% at 12 months). Seroconversion at the adequate protective response level of 10 mIU/mL was reached more quickly by the accelerated schedule group (62% vs. 49%), and with a higher mean titer (104.8 vs. 64.3 mIU/mL), when measured at six months. Multivariate analyses indicated a 63% increased risk of non-response with older age and confirmed an accelerating decline in immune response to vaccination after age 40 (p=0.001). Injecting more than daily was also strongly associated with the risk of non-response (p=0.016).

The substantial increase in the seroprotection rate at six months may be worth the trade-off against the faster decline in antibody titer, and the accelerated schedule is recommended for enhancing compliance and seroconversion. Using the accelerated schedule with the primary objective of increasing compliance and seroconversion rates during the six months after the first dose may confer early protective immunity and reduce the HBV vulnerability of drug users who continue, or have recently initiated, high-risk drug use and sexual behaviors.
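A minimal sketch of the kind of logistic regression completion analysis described above; the variable names and simulated data are hypothetical stand-ins for the study's records:

```python
# Hypothetical logistic regression for vaccine-series completion;
# the data below are simulated, not the study's records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "completed": rng.integers(0, 2, n),    # completed all three doses
    "homeless": rng.integers(0, 2, n),     # living on the streets
    "speedball": rng.integers(0, 2, n),    # current speedball use
    "age": rng.normal(40, 10, n),
    "accelerated": rng.integers(0, 2, n),  # 0-, 1-, 2-month schedule
})

model = smf.logit("completed ~ homeless + speedball + age + accelerated", data=df)
result = model.fit(disp=False)
print(np.exp(result.params))  # exponentiated coefficients = odds ratios
```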
Abstract:
Opioids dominate the field of pain management because of their ability to provide analgesia in many medical circumstances. However, side effects including respiratory depression, constipation, tolerance, physical dependence, and the risk of addiction limit their clinical utility. Fear of these side effects results in the under-treatment of acute pain. For many years, research has focused on ways to improve the therapeutic index (the ratio of desirable analgesic effects to undesirable side effects) of opioids. One strategy, combining opioid agonists that bind to different opioid receptor types, may prove successful.

We discovered that subcutaneous co-administration of a moderately analgesic dose of the mu-opioid receptor (MOR) selective agonist fentanyl (20μg/kg) with subanalgesic doses of the less MOR-specific agonist morphine (100ng/kg-100μg/kg) augmented acute fentanyl analgesia in rats. Parallel [35S]GTPγS binding studies using naïve rat substantia gelatinosa membrane treated with fentanyl (4μM) and morphine (1nM-1pM) demonstrated a 2-fold increase in total G-protein activation. This correlation between morphine-induced augmentation of fentanyl analgesia and G-protein activation led to our proposal that interactions between MORs and delta-opioid receptors (DORs) underlie opioid-induced augmentation. We found that morphine-induced augmentation of fentanyl analgesia and G-protein activity was mediated by DORs: adding the DOR-selective antagonist naltrindole (200ng/kg, 40nM), at doses that did not alter the analgesia or G-protein activation produced by fentanyl alone, blocked the increases in analgesia and G-protein activation induced by fentanyl/morphine combinations, whereas equivalent doses of the MOR-selective antagonist cyprodime (20ng/kg, 4nM) did not block augmentation. Substitution of the DOR-selective agonist SNC80 for morphine yielded similar results, further supporting our conclusion that interactions between MORs and DORs are responsible for morphine-induced augmentation of fentanyl analgesia and G-protein activation. Confocal microscopy of rat substantia gelatinosa showed that changes in the rate of opioid receptor internalization did not account for these effects.

In conclusion, augmentation of fentanyl analgesia by subanalgesic morphine is mediated by increased G-protein activation resulting from functional interactions between MORs and DORs, not by changes in MOR internalization. Additional animal and clinical studies are needed to determine whether side effect incidence changes following opioid co-administration. If side effect incidence decreases or remains unchanged, these findings could have important implications for clinical pain treatment.
Abstract:
Conventional designs of animal bioassays allocate the same number of animals to the control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purposes of such bioassays are (a) to determine whether or not the substance exhibits carcinogenic properties and (b) if so, to estimate the human response at relatively low doses. In this study, it was found that the optimal allocation to the experimental groups, one that in some sense minimizes the error of the estimated response for low-dose extrapolation, depends on the dose levels and the tumor risk. The number of dose levels was investigated subject to an affordable experimental cost. The dose pattern of 1 MTD, 1/2 MTD, 1/4 MTD, and so on, plus a control, gives the most reasonable arrangement for low-dose extrapolation. An arrangement of five dose groups may render the highest dose trivial; a four-dose design circumvents this problem and also retains one degree of freedom for testing the goodness-of-fit of the response model.

An example using data on liver tumors induced in mice in a lifetime dieldrin feeding study (Walker et al., 1973) illustrates the methodology. The results are compared with conclusions drawn from other studies.
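A brief sketch of the halving dose pattern with a one-hit fit for low-dose extrapolation follows; the MTD, tumor proportions, and low-dose target below are invented, not the dieldrin data:

```python
# Hypothetical halving-dose design (control, MTD/4, MTD/2, MTD) with a
# one-hit model fit; all numbers are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

MTD = 10.0  # hypothetical maximum tolerated dose
doses = np.array([0.0, MTD / 4, MTD / 2, MTD])

def one_hit(d, bg, lam):
    """One-hit model with background: P(d) = 1 - (1 - bg)*exp(-lam*d)."""
    return 1.0 - (1.0 - bg) * np.exp(-lam * d)

# Invented tumor proportions observed at each dose level
observed = np.array([0.02, 0.10, 0.22, 0.40])
(bg, lam), _ = curve_fit(one_hit, doses, observed, p0=[0.01, 0.05],
                         bounds=([0.0, 0.0], [1.0, np.inf]))

# Extrapolate the added risk over background at a low dose
d_low = 0.001
added_risk = one_hit(d_low, bg, lam) - one_hit(0.0, bg, lam)
print(f"estimated added risk at d = {d_low}: {added_risk:.2e}")
```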
Abstract:
Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which these methods are based, and the effects of the methods themselves, are not well understood.

To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor to these calculations.

A method of quantifying the conservatism of these procedures was described and tested on eight procedures used in setting low-dose safety standards. The results of these procedures were compared by computer simulation and with data from a large-scale animal study.

The method consists of determining a "true safe dose" (tsd) according to an assumed underlying model. If one assumes that Y, the probability of cancer, equals P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk one can solve for d, the model's "true safe dose."

Simulations assuming a binomial distribution were generated for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R over 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure.

The eight procedures were Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan method, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model.

None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well. However, when the data were non-linear, these same methods were overly conservative; Crump's procedure and the Weibull model performed better in those situations.
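The tsd/vsd simulation logic can be sketched compactly; the abstract does not give the eight procedures' details, so the "procedure" below is a toy one-hit fit with an arbitrary doubling safety factor, and all parameter values are hypothetical:

```python
# Hypothetical sketch of the tsd/vsd conservatism measure; the model,
# procedure, and parameter values are illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(1)
RISK = 1e-6  # acceptable risk of one per million
LAM = 0.5    # true one-hit slope: P(d) = 1 - exp(-LAM*d)

# "True safe dose": solve 1 - exp(-LAM*d) = RISK for d
tsd = -np.log(1.0 - RISK) / LAM

def vsd_one_hit(doses, n_per_group):
    """Toy procedure: estimate the one-hit slope from simulated binomial
    tumor counts, double it as a crude safety factor, then invert."""
    p_true = 1.0 - np.exp(-LAM * doses)
    tumors = rng.binomial(n_per_group, p_true)
    lam_hat = tumors.sum() / (n_per_group * doses).sum()  # pooled estimate
    return -np.log(1.0 - RISK) / (2.0 * lam_hat)

doses = np.array([1.0, 2.0, 4.0])
ratios = []
for _ in range(500):  # 500 simulated "experiments", as in the abstract
    vsd = vsd_one_hit(doses, 50)
    ratios.append((tsd - vsd) / vsd)
R = np.array(ratios)
print(f"mean R = {R.mean():.2f}, P(R < 0) = {(R < 0).mean():.2f}")
```

Here R > 0 means the procedure is conservative (the vsd falls below the true safe dose), while R < 0 flags under-conservatism.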