854 results for drug dose increase
Abstract:
In chronic myelogenous leukemia (CML), oncogenic BCR-ABL1 activates the Wnt pathway, which is fundamental for leukemia stem cell (LSC) maintenance. Tyrosine kinase inhibitor (TKI) treatment reduces Wnt signaling in LSCs and often results in molecular remission of CML; however, LSCs persist long term despite BCR-ABL1 inhibition, ultimately causing disease relapse. We demonstrate that TKIs induce the expression of the tumor necrosis factor (TNF) family ligand CD70 in LSCs by down-regulating microRNA-29, resulting in reduced CD70 promoter DNA methylation and up-regulation of the transcription factor specificity protein 1. The resulting increase in CD70 triggered CD27 signaling and compensatory Wnt pathway activation. Combining TKIs with CD70 blockade effectively eliminated human CD34+ CML stem/progenitor cells in xenografts and LSCs in a murine CML model. Therefore, targeting TKI-induced expression of CD70 and compensatory Wnt signaling resulting from the CD70/CD27 interaction is a promising approach to overcoming treatment resistance in CML LSCs.
Abstract:
Ninety-one Swiss veal farms producing under a label with improved welfare standards were visited between August and December 2014 to investigate risk factors related to antimicrobial drug use and mortality. All herds consisted of own and purchased calves, with a median of 77.4% purchased calves. The calves' mean age at purchase was 29±15 days, and the fattening period lasted on average 120±28 days. The mean carcass weight was 125±12 kg. A mean of 58±33 calves were fattened per farm and year, and purchased calves came from a mean of 20±17 farms of origin. Antimicrobial drug treatment incidence was calculated with the defined daily dose methodology. The mean treatment incidence (TIADD) was 21±15 daily doses per calf and year. The mean mortality risk was 4.1%; calves died at a mean age of 94±50 days, and the main causes of death were bovine respiratory disease (BRD, 50%) and gastrointestinal disease (33%). Two multivariable models were constructed, one for antimicrobial drug treatment incidence (53 farms) and one for mortality (91 farms). Lack of quarantine, shared air space for several groups of calves, and no clinical examination upon arrival at the farm were associated with increased antimicrobial treatment incidence. Maximum group size and weight differences >100 kg within a group were associated with increased mortality risk, while vaccination and beef breed were associated with decreased mortality risk. The majority of antimicrobial treatments (84.6%) were given as group treatments with oral powder fed through an automatic milk feeding system. Combination products containing chlortetracycline with tylosin and sulfadimidine, or with spiramycin, were used for 54.9% of the oral group treatments, and amoxicillin for 43.7%. The main indication for individual treatment was BRD (73%). The mean age at the time of treatment was 51 days, corresponding to an estimated weight of 80-100 kg.
Individual treatments were mainly applied by injection (88.5%) and included fluoroquinolones in 38.3%, penicillins (amoxicillin or benzylpenicillin) in 25.6%, macrolides in 13.1%, tetracyclines in 12.0%, 3rd- and 4th-generation cephalosporins in 4.7%, and florfenicol in 3.9% of the cases. The present study identified risk factors for increased antimicrobial drug treatment and mortality, an important basis for future studies aiming to reduce treatment incidence and mortality on veal farms. Our results indicate that drug selection for the treatment of veal calves needs to be improved according to the principles of prudent use of antibiotics.
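The defined daily dose methodology used above reduces to a simple ratio: total active substance administered, divided by the defined daily dose per kg, a standard animal weight, and the number of animals. A minimal sketch; all input figures are hypothetical, chosen only so the result reproduces the reported mean TIADD of 21:

```python
def treatment_incidence_ddd(total_mg_used, ddd_mg_per_kg_day,
                            standard_weight_kg, n_calves):
    """Daily doses administered per calf over the period (TI_ADD)."""
    return total_mg_used / (ddd_mg_per_kg_day * standard_weight_kg * n_calves)

ti = treatment_incidence_ddd(
    total_mg_used=1_218_000,   # mg active substance per farm-year (hypothetical)
    ddd_mg_per_kg_day=10.0,    # defined daily dose in mg/kg/day (hypothetical)
    standard_weight_kg=100.0,  # the text estimates 80-100 kg at treatment
    n_calves=58,               # mean number of calves fattened per farm and year
)
print(ti)  # 21.0 daily doses per calf and year
```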
Abstract:
OBJECTIVE Due to the reduction of immune-suppressive drugs, patients with rheumatic diseases can experience an increase in disease activity during pregnancy. In such cases, TNF-inhibitors may be prescribed. However, monoclonal antibodies with an Fc moiety are actively transported across the placenta, resulting in therapeutic drug levels in the newborn. As certolizumab (CZP) lacks the Fc moiety, it may carry a lower risk for the child. METHOD We report a case series of thirteen patients (5 with rheumatoid arthritis and 8 with spondyloarthritis) treated with CZP during late pregnancy to control disease activity. RESULT CZP measured in the cord blood of eleven infants ranged from undetectable levels to 1μg/mL, whereas the median CZP level in maternal plasma was 32.97μg/mL. Three women developed an infection during the third trimester, of whom one had a severe infection and one had an infection that resulted in a pre-term delivery. During the postpartum period, 6 patients remained on CZP while breastfeeding. CZP levels in the breast milk of two breastfeeding patients were undetectable. CONCLUSION The lack of active transplacental transfer of CZP makes it possible to treat inflammatory arthritis during late gestation without potential harm to the newborn. However, in pregnant women treated with TNF-inhibitors and prednisone, attention should be given to the increased susceptibility to infections, which might cause prematurity. CZP treatment can be continued while breastfeeding.
Abstract:
Small chemicals like drugs tend to bind to proteins via noncovalent bonds, e.g. hydrogen bonds, salt bridges or electrostatic interactions. Some chemicals interact with molecules other than their intended target, representing so-called 'off-target' activities of drugs. Such interactions are a main cause of adverse side effects and are normally classified as predictable type A reactions. Detailed analysis of drug-induced immune reactions revealed that off-target activities also affect immune receptors, such as highly polymorphic human leukocyte antigens (HLA) or T cell receptors (TCR). Such drug interactions with immune receptors may lead to T cell stimulation, resulting in clinical symptoms of delayed-type hypersensitivity; this is termed the 'pharmacological interaction with immune receptors' (p-i) concept. Analysis of p-i has revealed that drugs bind preferentially or exclusively to distinct HLA molecules (p-i HLA) or to distinct TCR (p-i TCR). P-i reactions differ from 'conventional' off-target drug reactions in that the outcome is not due to the effect on the drug-modified cells themselves but is the consequence of reactive T cells. Hence, the complex and diverse clinical manifestations of delayed-type hypersensitivity are caused by the functional heterogeneity of T cells. In the abacavir model of p-i HLA, drug binding to HLA may result in alteration of the presented peptides. More importantly, drug binding to HLA generates a drug-modified HLA, which stimulates T cells directly, like an allo-HLA. In the sulfamethoxazole model of p-i TCR, responsive T cells likely require costimulation for full T cell activation. These findings may explain the similarity of delayed-type hypersensitivity reactions to graft-versus-host disease, and how systemic viral infections increase the risk of delayed-type hypersensitivity reactions.
Abstract:
Pregnant BALB/c mice have been widely used as an in vivo model to study Neospora caninum infection biology and to provide proof-of-concept for assessments of drugs and vaccines against neosporosis. The fact that this model has been used with different isolates of variable virulence, varying infection routes and differing methods to prepare the parasites for infection has rendered the comparison of results from different laboratories impossible. In most studies, mice were infected with a similar number of parasites (2 × 10^6) as employed in ruminant models (10^7 for cows and 10^6 for sheep), which seems inappropriate considering the enormous differences in the weight of these species. Thus, for achieving meaningful results in vaccination and drug efficacy experiments, a refinement and standardization of this experimental model is necessary. Accordingly, 2 × 10^6, 10^5, 10^4, 10^3 and 10^2 tachyzoites of the highly virulent and well-characterised Nc-Spain7 isolate were subcutaneously inoculated into mice at day 7 of pregnancy, and clinical outcome, vertical transmission, parasite burden and antibody responses were compared. Dams from all infected groups presented nervous signs, and the percentage of surviving pups at day 30 postpartum was surprisingly low (24%) even in mice infected with only 10^2 tachyzoites. Importantly, infection with 10^5 tachyzoites resulted in antibody levels, cerebral parasite burden in dams and a 100% mortality rate in pups identical to infection with 2 × 10^6 tachyzoites. Considering these results, it is reasonable to lower the challenge dose to 10^5 tachyzoites in further experiments assessing drugs or vaccine candidates.
Abstract:
Orthodontic tooth movement requires external orthodontic forces to be converted to cellular signals that result in the coordinated removal of bone on one side of the tooth (compression side) by osteoclasts, and the formation of new bone by osteoblasts on the other side (tension side). Orthodontic treatment can take several years, leading to problems of caries, periodontal disease, root resorption, and patient dissatisfaction. It appears that the velocity of tooth movement is largely dependent on the rate of alveolar bone remodeling. Pharmacological approaches to increase the rate of tooth movement are limited due to patient discomfort, severe root resorption, and drug-induced side effects. Recently, externally applied, cyclical, low magnitude forces (CLMF) have been shown to increase the bone mineral density of long bones, and the growth of craniofacial structures, in a variety of animal models. In addition, CLMF is well tolerated by the patient and produces no known adverse effects. However, its application in orthodontic tooth movement has not been specifically determined. Since factors that increase alveolar bone remodeling enhance the rate of orthodontic tooth movement, we hypothesized that externally applied, cyclical, low magnitude forces (CLMF) would increase the rate of orthodontic tooth movement. In order to test this hypothesis we used an in vivo rat orthodontic tooth movement model. Our specific aims were: Specific Aim 1: To develop an in vivo rat model for tooth movement. We developed a tooth movement model based upon two established rodent models (Ren et al. and Yoshimatsu et al.; see Figure 1). The amount of variation of tooth movement in rats exposed to 25-60 g of mesial force activated from the first molar to the incisor for 4 weeks was calculated. Specific Aim 2: To determine the frequency dose response of externally applied, cyclical, low magnitude forces (CLMF) for maximal tooth movement and osteoclast numbers.
Our working hypothesis for this aim was that the amount of tooth movement would depend on the frequency of application of the CLMF. In order to test this working hypothesis, we varied the frequency of the CLMF across 30, 60, 100, and 200 Hz (0.4N, two times per week, for 10 minutes, for 4 weeks) and measured the amount of tooth movement. We also counted the number of osteoclasts at the different frequencies, hypothesizing that osteoclast numbers would increase with frequency in a dose-dependent manner. Specific Aim 3: To determine the effects of externally applied, cyclical, low magnitude forces (CLMF) on PDL proliferation. Our working hypothesis for this aim was that PDL proliferation would increase with CLMF. In order to test this hypothesis we compared CLMF (30 Hz, 0.4N, two times per week, for 10 minutes for 4 weeks) applied on the left side (experimental side) to the non-CLMF right side (control side). This was an experimental study with 24 rats in total. The experimental group contained fifteen (15) rats, all of which received a spring plus a different frequency of CLMF. Three (3) received a spring and CLMF at 30 Hz, 0.4N for 10 minutes. Six (6) received a spring and CLMF at 60 Hz, 0.4N for 10 minutes. Three (3) received a spring and CLMF at 100 Hz, 0.4N for 10 minutes. Three (3) received a spring and CLMF at 200 Hz, 0.4N for 10 minutes. The control group contained six (6) rats, which received only a spring. An additional three (3) rats received CLMF only (30 Hz, 0.4N, two times per week, for 10 minutes for 4 weeks), with no spring, and were used only for histological purposes. Rats were subjected to the application of orthodontic force from their maxillary left first molar to their left central incisor. In addition, some of the rats received externally applied, cyclical, low magnitude force (CLMF) on their maxillary left first molar. Micro-CT was used to measure the amount of orthodontic tooth movement.
The distance between the maxillary first and second molars, measured from the most mesial point of the second molar to the most distal point of the first molar (1M-2M distance), was used to evaluate the distance of tooth movement. Immunohistochemistry was performed with TRAP staining and BrdU quantification. Externally applied, cyclical, low magnitude forces (CLMF) do appear to have an effect, although not a significant one, on the rate of orthodontic tooth movement in rats. It appears that lower-frequency CLMF decreases the rate of tooth movement, while higher-frequency CLMF increases it. Future studies with larger sample sizes are needed to clarify this issue. CLMF does not appear to affect proliferation in PDL cells, and has no effect on the number of osteoclasts.
Abstract:
The objective of this research has been to study the molecular basis for chromosome aberration formation. Based on a variety of data, Mitomycin C (MMC)-induced DNA damage has been postulated to cause the formation of chromatid breaks (and gaps) by preventing the replication of regions of the genome prior to mitosis. The basic protocol for these experiments involved treating synchronized HeLa cells in G1-phase with a 1 μg/ml dose of MMC for one hour. After removing the drug, cells were allowed to progress to mitosis and were harvested for analysis by selective detachment. Utilizing the alkaline elution assay for DNA damage, evidence was obtained to support the conclusion that HeLa cells can progress through S-phase into mitosis with intact DNA-DNA interstrand crosslinks. A higher level of crosslinking was observed in those cells remaining in interphase compared to those able to reach mitosis at the time of analysis. Dual radioisotope labeling experiments revealed that, at this dose, these crosslinks were associated to the same extent with both parental and newly replicated DNA. This finding was shown not to be the result of a two-step crosslink formation mechanism in which crosslink levels increase with time after drug treatment. It was also shown not to be an artifact of the double-labeling protocol. Using neutral CsCl density gradient ultracentrifugation of mitotic cells containing BrdU-labeled newly replicated DNA, control cells exhibited one major peak at a heavy/light density. MMC-treated cells had this same major peak at the heavy/light density, in addition to a minor peak at a density characteristic of light/light DNA. This was interpreted as indicating either (1) that some parental DNA had not been replicated in the MMC-treated sample, or (2) that a recombination repair mechanism was operational.
To distinguish between these two possibilities, flow cytometric DNA fluorescence (i.e., DNA content) measurements of MMC-treated and control cells were made. These studies revealed that mitotic cells treated with MMC while in G1-phase displayed a 10-20% lower DNA content than untreated control cells when measured under conditions that neutralize chromosome condensation effects (i.e., hypotonic treatment). These measurements were made under conditions in which the binding of the drug, MMC, was shown not to interfere with the stoichiometry of the ethidium bromide-mithramycin stain. At the chromosome level, differential staining techniques were used in an attempt to visualize unreplicated regions of the genome, but staining indicative of large unreplicated regions was not observed. These results are best explained by a recombinogenic mechanism. A model consistent with these results has been proposed.
Abstract:
Recently, it has become apparent that DNA repair mechanisms are involved in the malignant progression of gliomas and their resistance to therapy. Many investigators have shown that increased levels of O6-methylguanine DNA alkyltransferase, a DNA monoalkyl adduct repair enzyme, are correlated with resistance of malignant glioma cell lines to nitrosourea-based chemotherapy. Three important DNA excision repair genes, ERCC1 (excision repair cross complementation group 1), ERCC2 (excision repair cross complementation group 2), and ERCC6 (excision repair cross complementation group 6), have been studied in human tumors. Gene copy number variation of ERCC1 and ERCC2 has been observed in primary glioma tissues. A number of reports have also described a relationship between ERCC1 gene alterations and resistance to anti-cancer drugs. The levels of ERCC1 gene expression, however, have not been correlated with drug resistance in gliomas. The expression of ERCC6 gene transcripts has been shown to vary with tissue type and to be highest in the brain. There have been no comprehensive studies so far, however, of ERCC6 gene expression and molecular alterations in malignant glioma. This project examined ERCC1 expression levels and correlated them with cisplatin resistance in malignant glioma cell lines. We also examined molecular alterations of the ERCC6 gene in primary glioma tissues and cells and analyzed whether these alterations are related to tumor progression and chemotherapy resistance. Our results indicate the presence of mutations and/or deletions in exons II and V of the ERCC6 gene, with these alterations being more frequent in exon II. Furthermore, the mutations and/or deletions in exon II were shown to be associated with increased malignant grade of gliomas. The levels of ERCC1 gene transcripts were shown to correlate with cisplatin resistance. The increase in ERCC1 mRNA induced by cisplatin could be down-regulated by cyclosporin A and herbimycin A. The results of this study are likely to provide useful information for the clinical treatment of human gliomas.
Abstract:
This study is a retrospective longitudinal study at Texas Children's Hospital (TCH), a 350-bed tertiary level pediatric teaching hospital in Houston, Texas, covering the period 1990 to 2006. It measured the incidence and trends of positive pre-employment drug tests among new job applicants at TCH. Over the study period, 16,219 job applicants underwent pre-employment drug screening at TCH. Of these, 330 applicants (2%) tested positive on both the EMIT and GC/MS. After review by the medical review officer, the number of true positive applicants decreased to 126 (0.78%). According to the overall annual positive drug test incidence rates, the highest overall incidence was in 2002 (14.71 per 1000 tests) and the lowest in 2004 (3.17 per 1000 tests). Despite a marked increase in 2002, the overall incidence tended to decrease over the study period. Incidence rates and trends of other illegal drugs are further discussed in the study; in general, these incidence rates also declined over the study period. In addition, we found that positive drug tests were more common in females than in males (55.5% versus 44.4%).
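The headline percentages above follow directly from the reported counts; a quick check:

```python
# Reproduce the crude positivity rates from the reported counts.
applicants = 16_219      # pre-employment drug screens, 1990-2006
screen_positive = 330    # positive on both EMIT and GC/MS
confirmed = 126          # remaining positive after medical review officer review

print(round(screen_positive / applicants * 100, 1))  # 2.0 (%)
print(round(confirmed / applicants * 100, 2))        # 0.78 (%)
print(round(confirmed / applicants * 1000, 2))       # 7.77 per 1000 tests
```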
Abstract:
5-aza-2'-deoxycytidine (DAC) is a cytidine analogue that strongly inhibits DNA methylation and was recently approved for the treatment of myelodysplastic syndromes (MDS). To maximize clinical results with DAC, we investigated its use as an anti-cancer drug. We also investigated mechanisms of resistance to DAC in vitro in cancer cell lines and in vivo in MDS patients after relapse. We found that DAC sensitized cells to the effect of 1-β-D-arabinofuranosylcytosine (Ara-C). The combination of DAC and Ara-C, or Ara-C following DAC, showed additive or synergistic effects on cell death in four human leukemia cell lines in vitro, but antagonism in terms of global methylation, RIL gene activation, and H3 lys-9 acetylation of short interspersed elements (Alu). One possible explanation is that hypomethylated cells are sensitized to cell killing by Ara-C. Turning to resistance, we found that the IC50 of DAC differed 1000-fold among cell lines and was correlated with the dose of DAC that induced peak hypomethylation of long interspersed nuclear elements (LINE) (r=0.94, P<0.001), but not with LINE methylation at baseline (r=0.05, P=0.97). Sensitivity to DAC did not significantly correlate with sensitivity to another hypomethylating agent, 5-azacytidine (AZA) (r=0.44, P=0.11). The cell lines most resistant to DAC had low levels of dCK and of the hENT1 and hENT2 transporters, and high cytosine deaminase (CDA). In an HL60 leukemia cell line, resistance to DAC could be rapidly induced by drug exposure and was related to a switch from monoallelic to biallelic mutation of dCK or loss of the wild-type dCK allele. Furthermore, we showed that DAC induced DNA breaks, evidenced by histone H2AX phosphorylation, and increased homologous recombination rates 7-10 fold. Finally, we found no dCK mutations in MDS patients after relapse. Cytogenetics showed that three of the patients acquired new abnormalities at relapse.
These data suggest that in vitro spontaneous and acquired resistance to DAC can be explained by insufficient incorporation of drug into DNA. In vivo resistance to DAC is likely due to methylation-independent pathways such as chromosome changes. The lack of cross-resistance between DAC and AZA is of potential clinical relevance, as is the combination of DAC and Ara-C.
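The resistance correlates above are reported as Pearson coefficients (e.g. r = 0.94 between DAC IC50 and the peak-hypomethylation dose). For reference, the statistic is straightforward to compute; the cell-line values below are invented for illustration, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical log10(IC50) and peak-hypomethylation dose for five cell lines:
log_ic50 = [0.1, 0.9, 1.8, 2.6, 3.1]
peak_dose = [0.2, 1.1, 1.7, 2.8, 3.0]
print(round(pearson_r(log_ic50, peak_dose), 2))  # 0.99: strongly correlated
```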
Abstract:
Opioids dominate the field of pain management because of their ability to provide analgesia in many medical circumstances. However, side effects including respiratory depression, constipation, tolerance, physical dependence, and the risk of addiction limit their clinical utility. Fear of these side effects results in the under-treatment of acute pain. For many years, research has focused on ways to improve the therapeutic index (the ratio of desirable analgesic effects to undesirable side effects) of opioids. One strategy, combining opioid agonists that bind to different opioid receptor types, may prove successful. We discovered that subcutaneous co-administration of a moderately analgesic dose of the mu-opioid receptor (MOR) selective agonist fentanyl (20μg/kg) with subanalgesic doses of the less MOR-specific agonist morphine (100ng/kg-100μg/kg) augmented acute fentanyl analgesia in rats. Parallel [35S]GTPγS binding studies using naïve rat substantia gelatinosa membrane treated with fentanyl (4μM) and morphine (1nM-1pM) demonstrated a 2-fold increase in total G-protein activation. This correlation between morphine-induced augmentation of fentanyl analgesia and G-protein activation led to our proposal that interactions between MORs and delta-opioid receptors (DORs) underlie opioid-induced augmentation. We discovered that morphine-induced augmentation of fentanyl analgesia and G-protein activity was mediated by DORs. Administration of the DOR-selective antagonist naltrindole (200ng/kg, 40nM), at doses that did not alter fentanyl analgesia or G-protein activation, blocked the increases in analgesia and G-protein activation induced by fentanyl/morphine combinations. Equivalent doses of the MOR-selective antagonist cyprodime (20ng/kg, 4nM) did not block augmentation.
Substitution of the DOR-selective agonist SNC80 for morphine yielded similar results, further supporting our conclusion that interactions between MORs and DORs are responsible for morphine-induced augmentation of fentanyl analgesia and G-protein activation. Confocal microscopy of rat substantia gelatinosa showed that changes in the rate of opioid receptor internalization did not account for these effects. In conclusion, augmentation of fentanyl analgesia by subanalgesic morphine is mediated by increased G-protein activation resulting from functional interactions between MORs and DORs, not by changes in MOR internalization. Additional animal and clinical studies are needed to determine whether side effect incidence changes following opioid co-administration. If side effect incidence decreases or remains unchanged, these findings could have important implications for clinical pain treatment.
Abstract:
Background. Employee assistance programs (EAPs) for airline pilots in companies with a well-developed recovery management program are known to reduce pilot absenteeism following treatment. Given the costs and safety consequences to society, it is important to identify pilots who may be experiencing an alcohol or drug (AOD) disorder and get them into treatment. Hypotheses. This study investigated the predictive power of workplace absenteeism in identifying AOD disorders. The first hypothesis was that higher absenteeism in a 12-month period is associated with a higher risk that an employee is experiencing an AOD disorder. The second hypothesis was that AOD treatment would reduce subsequent absence rates and the costs of replacing pilots on missed flights. Methods. A case-control design using eight years of monthly archival absence data (53,000 pay records) was conducted with a sample of 76 employees having an AOD diagnosis (cases) matched 1:4 with 304 non-diagnosed employees (controls) of the same profession and company (male commercial airline pilots). Cases and controls were matched on age, rank and date of hire. Absence rate was defined as sick time hours used over the sum of the minimum guaranteed pay hours, annualized using the months the pilot worked for the year. Conditional logistic regression was used to determine whether absence predicts employees experiencing an AOD disorder, starting 3 years prior to the cases receiving the AOD diagnosis. Repeated measures ANOVA, t tests and rate ratios (with 95% confidence intervals) were used to determine differences between cases and controls in absence usage for the 3 years pre- and 5 years post-treatment. Mean replacement costs were calculated for sick leave usage 3 years pre- and 5 years post-treatment to estimate the cost of sick leave from the perspective of the company. Results.
Sick leave, as measured by absence rate, predicted the risk of being diagnosed with an AOD disorder (OR 1.10, 95% CI = 1.06, 1.15) during the 12 months prior to receiving the diagnosis. Mean absence rates for diagnosed employees increased over the three years before treatment, particularly in the year before treatment, whereas the controls' did not (three years, x = 6.80 vs. 5.52; two years, x = 7.81 vs. 6.30; one year, x = 11.00 for cases vs. 5.51 for controls). In the first year post-treatment compared to the year prior to treatment, rate ratios indicated a significant (60%) post-treatment reduction in absence rates (OR = 0.40, CI = 0.28, 0.57). Absence rates for cases remained lower than for controls for the first three years after completion of treatment. Upon discharge from the FAA and company's three-year AOD monitoring program, cases' absence rates increased slightly during the fourth year (controls, x = 0.09, SD = 0.14; cases, x = 0.12, SD = 0.21). However, the following year, their mean absence rates were again below those of the controls (controls, x = 0.08, SD = 0.12; cases, x = 0.06, SD = 0.07). Costs associated with replacing pilots calling in sick fell significantly, by 60%, between the year of diagnosis for the cases and the first year after returning to work. The reduction in replacement costs continued over the next two years for the treated employees. Conclusions. This research demonstrates the potential of workplace absences as an active organizational surveillance mechanism to assist managers and supervisors in identifying employees who may be experiencing, or at risk of experiencing, an alcohol/drug disorder. Currently, many workplaces use only performance problems and ignore the employee's absence record.
A referral to an EAP or alcohol/drug evaluation based on the employee's absence/sick leave record, as incorporated into company policy, can provide another useful indicator that may also carry less stigma, thus reducing barriers to seeking help. This research also confirms two conclusions heretofore based only on cross-sectional studies: (1) higher absence rates are associated with employees experiencing an AOD disorder; (2) treatment is associated with lower costs for replacing absent pilots. Due to the uniqueness of the employee population studied (commercial airline pilots) and the organizational documentation of absence, the generalizability of this study to other professions and occupations should be considered limited. Transition to Practice. The odds ratios for the relationship between absence rates and an AOD diagnosis are precise; the OR for the year of diagnosis indicates that the likelihood of being diagnosed increases 10% for every hour change in sick leave taken. In practice, however, a pilot uses approximately 20 hours of sick leave for one trip, because the replacement will have to be paid the guaranteed minimum of 20 hours. Thus, the rate based on hourly changes is precise but not practical. To provide the organization with practical recommendations, the yearly mean absence rates were used. A pilot flies on average 90 hours a month, 1080 annually. Cases used almost twice the mean rate of sick time in the year prior to diagnosis (T-1) compared to controls (cases, x = .11; controls, x = .06). Cases are thus expected to use on average 119 hours annually (total annual hours × mean annual absence rate), while controls will use 60 hours. The controls' 60 hours could translate to 3 trips of 20 hours each. Management could use a standard of 80 hours or more of sick time claimed in a year as the threshold for unacceptable absence, a 25% increase over the controls (a cost to the company of approximately $4000).
At the 80-hour mark, the Chief Pilot would be able to call the pilot in for a routine check as to the nature of the pilot's excessive absence. This management action would be based on a company standard, rather than a behavioral or performance issue. Using absence data in this fashion would make it an active surveillance mechanism.
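The threshold arithmetic in the transition-to-practice section can be laid out explicitly, using the rates quoted above (the text rounds the raw products to its 119- and 60-hour figures):

```python
# Expected annual sick-leave hours from the quoted mean absence rates.
annual_flight_hours = 90 * 12         # average 90 h/month -> 1080 h/year
case_rate, control_rate = 0.11, 0.06  # mean absence rates in the year before diagnosis

expected_cases = annual_flight_hours * case_rate       # 118.8 -> ~119 h/year
expected_controls = annual_flight_hours * control_rate # 64.8 h/year

hours_per_trip = 20   # one replaced trip costs the 20-hour guaranteed minimum
threshold = 80        # proposed company standard for a routine check
print(threshold / hours_per_trip)  # 4.0 trips' worth of sick leave
```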
Abstract:
Risk factors for Multi-Drug Resistant Acinetobacter (MDRA) acquisition were studied in patients in a burn intensive care unit (ICU) where there was an outbreak of MDRA. Forty cases were matched with eighty controls based on length of stay in the burn ICU, and statistical analysis was performed on data for several different variables. Matched analysis showed that mechanical ventilation, transport ventilation, number of intubations, number of bronchoscopy procedures, total body surface area burned, and prior Methicillin Resistant Staphylococcus aureus colonization were all significant risk factors for MDRA acquisition. MDRA remains a significant threat to the burn population. Treatment for burn patients with MDRA is challenging as resistance to antibiotics continues to increase. This study underlined the need to closely monitor the most critically ill ventilated patients during an outbreak of MDRA, as they are the most at risk for MDRA acquisition.
Abstract:
Treating patients with combined agents is a growing trend in cancer clinical trials. Evaluating the synergism of multiple drugs is often the primary motivation for such drug-combination studies. Focusing on drug-combination studies in early-phase clinical trials, our research is composed of three parts: (1) We conduct a comprehensive comparison of four dose-finding designs in the two-dimensional toxicity probability space and propose using the Bayesian model averaging method to overcome the arbitrariness of model specification and enhance the robustness of the design; (2) Motivated by a recent drug-combination trial at MD Anderson Cancer Center with a continuous-dose standard-of-care agent and a discrete-dose investigational agent, we propose a two-stage Bayesian adaptive dose-finding design based on an extended continual reassessment method; (3) By combining phase I and phase II clinical trials, we propose an extension of a single-agent dose-finding design, modeling time-to-event toxicity and efficacy to direct dose finding in two-dimensional drug-combination studies. We conduct extensive simulation studies to examine the operating characteristics of these designs and demonstrate their good performance in various practical scenarios.
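Part (2)'s starting point, the continual reassessment method, is a one-parameter Bayesian update. A generic textbook sketch, not the trial's actual design: the skeleton, prior, and data below are illustrative, and a grid approximation stands in for proper numerical integration:

```python
import math

skeleton = [0.05, 0.10, 0.20, 0.35]  # prior guesses of toxicity probability per dose
target = 0.25                        # target toxicity rate

def posterior_mean_tox(tox, n_pat, grid_size=2001):
    """Posterior mean toxicity per dose under the power model p_d = skeleton_d**exp(a),
    with a normal prior on a, approximated on a grid."""
    lo, hi = -4.0, 4.0
    step = (hi - lo) / (grid_size - 1)
    weights, curves = [], []
    for g in range(grid_size):
        a = lo + g * step
        prior = math.exp(-a * a / (2 * 1.34 ** 2))  # N(0, 1.34^2), a common default
        probs = [p0 ** math.exp(a) for p0 in skeleton]
        like = 1.0
        for d, p in enumerate(probs):
            like *= p ** tox[d] * (1 - p) ** (n_pat[d] - tox[d])
        weights.append(prior * like)
        curves.append(probs)
    z = sum(weights)
    return [sum(w * c[d] for w, c in zip(weights, curves)) / z
            for d in range(len(skeleton))]

# One toxicity among three patients treated at the second dose:
post = posterior_mean_tox(tox=[0, 1, 0, 0], n_pat=[0, 3, 0, 0])
next_dose = min(range(len(post)), key=lambda d: abs(post[d] - target))
```

The next cohort is assigned the dose whose posterior mean toxicity is closest to the target; the extensions described above add a second agent dimension and, in part (3), time-to-event outcomes.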
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease.
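The two model shapes described above can be sketched as simple curves: a change-point dose-toxicity model that plateaus beyond a threshold dose, and a logistic dose-efficacy model with a quadratic term that allows efficacy to peak at an intermediate dose. This is a minimal one-dimensional illustration, and every parameter value below is hypothetical, not taken from the dissertation.

```python
import math

# Hypothetical sketch of the two curve shapes: toxicity with a plateau
# (change-point model) and non-monotonic efficacy (quadratic logistic).
# Doses are standardised to [0, 1]; all parameters are illustrative.

def toxicity_changepoint(dose, slope=8.0, changepoint=0.6, intercept=-3.0):
    """Toxicity rises with dose up to `changepoint`, then plateaus."""
    effective = min(dose, changepoint)          # doses past the change point
    logit = intercept + slope * effective       # contribute no extra toxicity
    return 1 / (1 + math.exp(-logit))

def efficacy_quadratic_logistic(dose, b0=-2.0, b1=9.0, b2=-6.0):
    """A negative quadratic term lets efficacy peak at a mid-range dose
    (here at dose = -b1 / (2 * b2) = 0.75) and decline beyond it."""
    logit = b0 + b1 * dose + b2 * dose ** 2
    return 1 / (1 + math.exp(-logit))
```

In the actual design these curves live on a two-dimensional dose-combination grid and their parameters carry posterior distributions updated as patients are observed; the sketch only shows why the BODC need not be the highest dose combination.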
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination corresponds to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for efficacious combinations to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether the agents are promising enough to advance to phase III trials. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
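The adaptive allocation step described above can be sketched with a simpler ingredient than the full hypothesis-testing machinery: the posterior probability that each arm is the best, computed under independent Beta-Binomial models. The function and the arm data in the test are hypothetical illustrations, not the dissertation's actual procedure.

```python
import random

# Sketch of posterior-probability-driven adaptive allocation: estimate
# P(arm k is best) by Monte Carlo under Beta(1 + r, 1 + n - r) posteriors,
# then allocate more patients to arms with higher probability of being best.

def prob_best(arms, n_draws=20000, seed=0):
    """arms: list of (responses, patients) per treatment combination.
    Returns the estimated probability that each arm is the best."""
    rng = random.Random(seed)
    wins = [0] * len(arms)
    for _ in range(n_draws):
        # One joint posterior draw of all arms' response rates
        draws = [rng.betavariate(1 + r, 1 + n - r) for r, n in arms]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]
```

Allocating the next cohort proportionally to these probabilities concentrates patients on the apparently efficacious combinations while the trial is still running, which is the behaviour the simulation comparison against the balanced factorial design rewards.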
To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration while yielding desirable operating characteristics across different physician-specified lower bounds on the response rate and different true response rates.
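The core of the piecewise exponential model is that the hazard of response is constant within each prespecified time interval, and its maximum-likelihood estimate is simply the number of responses divided by the person-time at risk in that interval. The sketch below shows only this estimation step, not the multiple-imputation machinery; the follow-up times and cut points are hypothetical.

```python
# Minimal sketch of piecewise-constant hazard estimation for
# time-to-response data: hazard in interval j = events_j / person-time_j.

def piecewise_hazards(times, events, cuts):
    """times: follow-up time per subject (time of response, or time of
    censoring if no response yet). events: 1 if the response was observed.
    cuts: interval boundaries, e.g. [0, 30, 60, 90]. Returns one hazard
    estimate per interval [cuts[j], cuts[j+1])."""
    hazards = []
    for lo, hi in zip(cuts, cuts[1:]):
        persontime = 0.0
        d = 0
        for t, e in zip(times, events):
            # Time this subject spent at risk inside [lo, hi)
            persontime += max(0.0, min(t, hi) - lo)
            if e and lo <= t < hi:
                d += 1                      # response occurred in interval
        hazards.append(d / persontime if persontime > 0 else 0.0)
    return hazards
```

With these interval-specific hazards one can compute, for a patient still unobserved at interim analysis, the conditional probability of responding later, which is what the imputation of missing late-onset responses needs.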