Abstract:
In fear conditioning, an animal learns to associate an unconditioned stimulus (US), such as a shock, with a conditioned stimulus (CS), such as a tone, so that presentation of the CS alone can trigger conditioned responses. Recent research on the lateral amygdala (LA) has shown that, following cued fear conditioning, only a subset of highly excitable neurons is recruited into the memory trace, and their selective deletion after fear conditioning selectively erases the fearful memory. I hypothesize that the recruitment of highly excitable neurons depends on responsiveness to stimuli, intrinsic excitability, and local connectivity. In addition, I hypothesize that neurons recruited for an initial memory also participate in subsequent memories, that changes in neuronal excitability affect secondary fear learning, and that activation of these LA networks is in itself sufficient to induce a fearful memory. To address these hypotheses, I will show that A) a rat can learn to associate two successive short-term fearful memories; and B) neuronal populations in the LA are competitively recruited into the memory traces depending on individual neuronal advantages as well as on advantages granted by the local network. By performing two successive cued fear conditioning experiments, I found that rats were able to learn and extinguish the two successive short-term memories when tested 1 hour after learning each memory. These rats were equipped with a system of stable extracellular recordings that I developed, which allowed me to monitor neuronal activity during fear learning. I recorded 233 individual putative pyramidal neurons that could modulate their firing rate in response to the conditioned tone (conditioned neurons) and/or to non-conditioned tones (generalizing neurons); of these, 86 (37%) were conditioned to one or both tones. More precisely, one population of neurons encoded a shared memory, while another group of neurons likely encoded the memories' new features. Notably, in spite of successful behavioral extinction, the firing rate of the conditioned neurons in response to the conditioned tone remained unchanged throughout memory testing. Furthermore, by analyzing the pre-conditioning characteristics of the conditioned neurons, I determined that neuronal recruitment could be predicted from three factors: 1) initial sensitivity to auditory inputs, with tone-sensitive neurons being more easily recruited than tone-insensitive neurons; 2) baseline excitability, with more highly excitable neurons being more likely to become conditioned; and 3) the number of afferent connections received from local neurons, with neurons destined to become conditioned receiving more connections than non-conditioned neurons. Finally, we found that the US in fear conditioning could be satisfactorily replaced by bilateral injections of bicuculline, a γ-aminobutyric acid receptor antagonist.
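Identifying conditioned neurons rests on detecting a change in tone-evoked firing rate relative to the pre-tone baseline. As a hedged, minimal sketch of such a criterion (the window lengths, the z-score threshold, and the function itself are illustrative assumptions, not the analysis actually used in this work):

```python
import numpy as np

def is_tone_responsive(spike_times, tone_onsets, baseline_win=2.0, tone_win=2.0, z_thresh=2.0):
    """Flag a neuron as tone-responsive when its mean tone-evoked firing rate
    deviates from the pre-tone baseline by more than z_thresh baseline SDs.
    spike_times and tone_onsets are 1-D arrays of timestamps in seconds."""
    baseline = np.array([np.sum((spike_times >= t - baseline_win) & (spike_times < t)) / baseline_win
                         for t in tone_onsets])
    evoked = np.array([np.sum((spike_times >= t) & (spike_times < t + tone_win)) / tone_win
                       for t in tone_onsets])
    sd = baseline.std(ddof=1)
    if sd == 0:
        return False  # silent or perfectly regular baseline: no usable reference
    return abs((evoked.mean() - baseline.mean()) / sd) >= z_thresh
```

Applied separately to the conditioned tone and to non-conditioned tones, such a criterion would sort units into the conditioned and generalizing categories described above.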
Abstract:
BACKGROUND: Early virological failure of antiretroviral therapy associated with the selection of drug-resistant human immunodeficiency virus type 1 in treatment-naive patients is of particular concern, because virological failure significantly increases the risk of subsequent failures. Therefore, we evaluated the possible role in early virological failure of minority quasispecies of drug-resistant human immunodeficiency virus type 1 that are undetectable at baseline by population sequencing. METHODS: We studied 4 patients who experienced early virological failure of a first-line regimen of lamivudine, tenofovir, and either efavirenz or nevirapine, and 18 control patients undergoing similar treatment without virological failure. The key mutations K65R, K103N, Y181C, M184V, and M184I in the reverse transcriptase were quantified by allele-specific real-time polymerase chain reaction performed on plasma samples obtained before and during early virological treatment failure. RESULTS: Before treatment, none of the viruses showed any evidence of drug resistance in the standard genotype analysis. Minority quasispecies with either the M184V mutation or the M184I mutation were detected in 3 of 18 control patients. In contrast, all 4 patients whose treatment was failing had harbored drug-resistant viruses at low frequencies (range, 0.07%-2.0%) before treatment, with 1-4 mutations detected in viruses from each patient. Most of the minority quasispecies were rapidly selected and represented the major virus population within weeks after the patients started antiretroviral therapy. All 4 patients showed good adherence to treatment. Nonnucleoside reverse-transcriptase inhibitor plasma concentrations were in normal ranges for all 4 patients at 2 separate assessment times. CONCLUSIONS: Minority quasispecies of drug-resistant viruses, detected at baseline, can rapidly expand to become the major virus population and subsequently lead to early therapy failure in treatment-naive patients who receive antiretroviral therapy regimens with a low genetic resistance barrier.
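The minority-variant frequencies reported here come from allele-specific real-time PCR. As a simplified, hedged illustration (real assays calibrate against standard curves of known mutant/wild-type mixtures rather than assuming ideal efficiency), the mutant fraction can be approximated from the threshold-cycle difference between a mutation-specific reaction and a reaction amplifying the total virus population:

```python
def mutant_frequency(ct_mutant: float, ct_total: float, efficiency: float = 2.0) -> float:
    """Approximate mutant template fraction from allele-specific real-time PCR.
    Assumes both reactions amplify with the same per-cycle efficiency
    (2.0 = perfect doubling); a lower Ct means more starting template."""
    return efficiency ** (ct_total - ct_mutant)

# Example: the mutant-specific reaction crosses threshold 10 cycles later
# than the total-population reaction -> frequency of about 2**-10, i.e. ~0.1%,
# within the 0.07%-2.0% range reported in the abstract.
print(f"{mutant_frequency(ct_mutant=38.0, ct_total=28.0):.4%}")
```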
Abstract:
Introduction: The last twenty years have witnessed important changes in the field of obstetric analgesia and anesthesia. In 2007, we conducted a survey to obtain information regarding the clinical practice of obstetric anesthesia in our country. The main objective was to ascertain whether recent developments in obstetric anesthesia had been adequately implemented into current clinical practice. Methodology: A confidential questionnaire was sent to 391 identified Swiss obstetric anesthetists. The questionnaire included 58 questions on 5 main topics: activity and organization of the obstetric unit, practice of labor analgesia, practice of anesthesia for cesarean section, prevention of aspiration syndrome, and pain treatment after cesarean section. Results: The response rate was 80% (311/391). 66% of the surveyed anesthetists worked in intermediate-size obstetric units (500-1500 deliveries per year). An anesthetist was on site around the clock in only 53% of the obstetric units. Epidural labor analgesia with low-dose local anesthetics combined with opioids was used by 87%, but only 30% used patient-controlled epidural analgesia (PCEA). Spinal anesthesia was the first choice for elective and urgent cesarean section for 95% of the responders. Adequate prevention of aspiration syndrome was prescribed by 78%. After cesarean section, a multimodal analgesic regimen was prescribed by 74%. Conclusion: When comparing these results with those of the two previous Swiss surveys [1, 2], it clearly appears that Swiss obstetric anesthetists have progressively adapted their practice to current clinical recommendations. However, this survey also revealed some insufficiencies: 1. In the public health system: a. an insufficient number of obstetric anesthetists on site around the clock; b. a lack of budget in some hospitals to purchase PCEA pumps. 2. In individual medical practice: a. frequent excessive dosage of hyperbaric bupivacaine during spinal anesthesia for cesarean section; b. frequent use of crystalloid preload before spinal anesthesia for cesarean section; c. frequent systematic use of opioids when inducing general anesthesia for cesarean section; d. fentanyl as the first-choice opioid during induction of general anesthesia for severe preeclampsia. In the future, wider and more systematic information campaigns by means of the Swiss Association of Obstetric Anesthesia (SAOA) should be able to correct these points.
Abstract:
The role of regulatory T cells (Treg) in the induction and maintenance of allograft tolerance is being studied extensively. In contrast, little is known about their potential to prevent graft rejection in the field of xenotransplantation, where acute vascular rejection mediated by cellular and humoral mechanisms, together with thrombotic microangiopathy, still prevents long-term graft survival. In this regard, the induction of donor-specific tolerance through isolation and expansion of xenoantigen-specific recipient Treg is currently becoming a focus of interest. This review summarizes the present knowledge concerning Treg and their potential use in xenotransplantation, describing in particular CD4(+)CD25(+)Foxp3(+) T cells, CD8(+)CD28(-) Treg, double-negative CD4(-)CD8(-) T cells, and natural killer Treg. Although only studied in vitro so far, human CD4(+)CD25(+)Foxp3(+) Treg are currently the best-characterized subpopulation of regulatory cells in xenotransplantation. CD8(+)CD28(-) Treg and double-negative CD4(-)CD8(-) Treg also seem to be implicated in tolerance maintenance of xenografts. Finally, one study revealing a role for natural killer CD4(+)Valpha14(+) Treg in the prolongation of xenograft survival needs further confirmation. In our opinion, CD4(+)CD25(+)Foxp3(+) Treg are a promising candidate to protect xenografts. In contrast to cadaveric allotransplantation, the donor is known prior to xenotransplantation; this advantage allows the expansion of recipient Treg in a xenoantigen-specific manner before transplantation.
Abstract:
Background: This study explores significant others' involvement before and after transplantation. Methods: Longitudinal semi-structured interviews were conducted with 64 patients awaiting transplantation of various organs. Among them, 58 patients spontaneously discussed the importance of their significant other in their daily support. Discourse analysis was applied. Findings: During the pre-transplantation period, renal patients reported that significant others took part in dialysis treatment and supported adherence to the treatment regimen. After transplantation, quality of life improved and couple dynamics returned to normal. Patients awaiting lung or heart transplantation were more heavily impaired, and significant others had to take over abandoned roles. After transplantation, resuming normal life became gradually possible, but after one year either the health benefits of transplantation relieved the physical, emotional, and social loads, or complications maintained the level of stress on significant others. Discussion: Patients reported that significant others had to take over various responsibilities, and they were concerned about the resulting long-term stress, which should be adequately supported.
Abstract:
BACKGROUND: In recent years, treatment options for human immunodeficiency virus type 1 (HIV-1) infection have changed from nonboosted protease inhibitors (PIs) to nonnucleoside reverse-transcriptase inhibitor (NNRTI)- and boosted PI-based antiretroviral drug regimens, but the impact on immunological recovery remains uncertain. METHODS: All patients in the Swiss HIV Cohort Study who received their first combination antiretroviral therapy (cART) during January 1996 through December 2004 and had known baseline CD4(+) T cell counts and HIV-1 RNA values were included (n = 3293). For follow-up, we used the Swiss HIV Cohort Study database update of May 2007. The mean (±SD) duration of follow-up was 26.8 ± 20.5 months. The follow-up time was limited to the duration of the first cART. CD4(+) T cell recovery was analyzed in 3 treatment groups: nonboosted PI, NNRTI, and boosted PI. The end point was the absolute increase in CD4(+) T cell count in the 3 treatment groups after the initiation of cART. RESULTS: Two thousand five hundred ninety individuals (78.7%) initiated a nonboosted-PI regimen, 452 (13.7%) initiated an NNRTI regimen, and 251 (7.6%) initiated a boosted-PI regimen. Absolute CD4(+) T cell count increases at 48 months were as follows: in the nonboosted-PI group, from 210 to 520 cells/μL; in the NNRTI group, from 220 to 475 cells/μL; and in the boosted-PI group, from 168 to 511 cells/μL. In a multivariate analysis, the treatment group did not affect the CD4(+) T cell response; however, increased age, pretreatment with nucleoside reverse-transcriptase inhibitors, positive serological tests for hepatitis C virus, Centers for Disease Control and Prevention stage C infection, lower baseline CD4(+) T cell count, and lower baseline HIV-1 RNA level were risk factors for smaller increases in CD4(+) T cell count. CONCLUSION: CD4(+) T cell recovery was similar in patients receiving nonboosted PI-, NNRTI-, and boosted PI-based cART.
Abstract:
Chronic administration of recombinant human erythropoietin (rHuEPO) can cause serious cardiovascular side effects, such as arterial hypertension, in both clinical and sports settings. It has been hypothesized that nitric oxide (NO) can protect against the noxious cardiovascular effects induced by chronic administration of rHuEPO. On this basis, we studied the cardiovascular effects of chronic administration of rHuEPO in exercise-trained rats treated with an inhibitor of NO synthesis (L-NAME). Rats were treated or not with rHuEPO and/or L-NAME during 6 weeks, during which they were subjected to treadmill exercise. Blood pressure was measured weekly. Endothelial function of the isolated aorta and of small mesenteric arteries was studied, and the morphology of the latter was investigated. L-NAME induced hypertension (197 ± 6 mmHg at the end of the protocol). Exercise prevented the rise in blood pressure induced by L-NAME (170 ± 5 mmHg). However, exercise-trained rats treated with both rHuEPO and L-NAME developed severe hypertension (228 ± 9 mmHg). Furthermore, in these exercise-trained rats treated with rHuEPO/L-NAME, acetylcholine-induced relaxation was markedly impaired in the isolated aorta (60% of maximal relaxation) and in small mesenteric arteries (53%). L-NAME hypertension induced an internal remodeling of small mesenteric arteries that was not modified by exercise, rHuEPO, or both. Vascular ET-1 production was not increased in rHuEPO/L-NAME/training hypertensive rats. Moreover, we observed that rHuEPO/L-NAME/training hypertensive rats died during exercise or the recovery period (mortality 51%). Our findings suggest that the use of rHuEPO in sport to improve physical performance represents a serious and potentially fatal risk, especially in the presence of pre-existing cardiovascular risk.
Abstract:
BACKGROUND: Glioblastoma, the most common adult primary malignant brain tumor, confers a poor prognosis (median survival of 15 months) notwithstanding aggressive treatment. Combination chemotherapy including carmustine (BCNU) or temozolomide (TMZ) with the MGMT inhibitor O6-benzylguanine (O6BG) has been used but has been associated with dose-limiting hematopoietic toxicity. OBJECTIVE: To assess the safety and efficacy of a retroviral vector encoding the O6BG-resistant MGMTP140K gene for transduction and autologous transplantation of hematopoietic stem cells (HSCs) in MGMT-unmethylated, newly diagnosed glioblastoma patients, in an attempt to chemoprotect the bone marrow during combination O6BG/TMZ therapy. METHODS: Three patients have been enrolled in the first cohort. Patients underwent standard radiation therapy without TMZ, followed by G-CSF mobilization, apheresis, and conditioning with 600 mg/m2 BCNU prior to infusion of gene-modified cells. Posttransplant, patients were treated with 28-day cycles of single-dose TMZ (472 mg/m2) with 48-hour intravenous O6BG (120 mg/m2 bolus, then 30 mg/m2/d). RESULTS: The BCNU dose was nonmyeloablative, with an ANC <500/μL for ≤3 days and a nadir platelet count of 28,000/μL. Gene marking in pre-infusion colony-forming units (CFUs) was 70.6%, 79.0%, and 74.0% in Patients 1, 2, and 3, respectively, by CFU-PCR. Following engraftment, gene marking in white blood cells and sorted granulocytes ranged between 0.37-0.84 and 0.33-0.83 provirus copies, respectively, by real-time PCR. Posttransplant gene marking in CFUs from CD34-selected cells ranged from 28.5% to 47.4%. Patients have received 4, 3, and 2 cycles of O6BG/TMZ, respectively, with evidence of selection of gene-modified cells. One patient has received a single dose-escalated cycle at 590 mg/m2 TMZ. No additional extra-hematopoietic toxicity has been observed thus far, and all three patients exhibit stable disease at 7-8 months since diagnosis. CONCLUSIONS: We believe that these data demonstrate the feasibility of achieving significant engraftment of MGMTP140K-modified cells with a well-tolerated dose of BCNU. Further follow-up will determine whether this approach will allow further dose escalation of TMZ and improved survival.
Abstract:
Glucocorticoids are used in an attempt to reduce brain edema secondary to head injury. Nevertheless, their usefulness remains uncertain, and the available evidence is contradictory. In a randomized study of 24 children with severe head injury, urinary free cortisol was measured by radioimmunoassay. Twelve patients (group 1) received dexamethasone and 12 (group 2) did not. All patients were treated with a standardized regimen. In group 1 there was complete suppression of endogenous cortisol production. In group 2, free cortisol was up to 20-fold higher than under basal conditions and reached maximum values on days 1-3. Since the excretion of cortisol in urine closely reflects the production rate and is not influenced by liver function or barbiturates, the results in group 2 show that the endogenous production of steroids is an adequate reaction to severe head injury. Exogenous glucocorticoids are thus unlikely to have any more beneficial effect than endogenous cortisol.
Abstract:
Background: EATL is a rare subtype of peripheral T-cell lymphoma characterized by primarily intestinal localization and a frequent association with celiac disease. The prognosis is considered to be poor with conventional chemotherapy, and limited data are available on the efficacy of ASCT in this lymphoma subtype. The primary objective was to study the outcome of ASCT as a consolidation or salvage strategy for EATL; the primary endpoints were overall survival (OS) and progression-free survival (PFS). Eligible patients were >18 years old, had received ASCT between 2000 and 2010 for EATL confirmed by review of written histopathology reports, and had sufficient information on disease history and follow-up available. The search strategy used the EBMT database to identify patients potentially fulfilling the eligibility criteria. An additional questionnaire was sent to individual transplant centres to confirm the histological diagnosis (histopathology report or pathology review) and to obtain updated follow-up data. Patient and transplant characteristics were compared between groups using the χ2 test or Fisher's exact test for categorical variables and the t-test or Mann-Whitney U-test for continuous variables. OS and PFS were estimated using the Kaplan-Meier product-limit estimate and compared by the log-rank test. Estimates for non-relapse mortality (NRM) and relapse or progression were calculated using cumulative incidence rates to accommodate competing risks and compared using Gray's test. Results: Altogether, 138 patients were identified. Updated follow-up data were received for 74 patients (54%) and histology reports for 54 patients (39%). In ten patients the diagnosis of EATL could not be adequately verified; thus the final analysis included 44 patients: 24 males and 20 females with a median age of 56 (range 35-72) years at the time of transplant. Twenty-five patients (57%) had a history of celiac disease. Disease stage was I in nine patients (21%), II in 14 patients (33%), and IV in 19 patients (45%). Twenty-four patients (55%) were in first CR or PR at the time of transplant. BEAM was used as the high-dose regimen in 36 patients (82%), and all patients received peripheral blood grafts. The median follow-up for survivors was 46 (range 2-108) months from ASCT. Three patients died early from transplant-related causes, translating into a 2-year non-relapse mortality of 7%. The relapse incidence at 4 years after ASCT was 39%, with no events occurring beyond 2.5 years after ASCT. PFS and OS were 54% and 59% at four years, respectively. There was a trend toward better OS in patients transplanted in first CR or PR compared with more advanced disease status (70% vs. 43%, p=0.053). Of note, patients with a history of celiac disease had superior PFS (70% vs. 35%, p=0.02) and OS (70% vs. 45%, p=0.052), whereas age, gender, disease stage, B-symptoms at diagnosis, and high-dose regimen were not associated with OS or PFS. Conclusions: This study shows for the first time in a larger patient sample that ASCT is feasible in selected patients with EATL and can yield durable disease control in a significant proportion of patients. Patients transplanted in first CR or PR appear to do better than those transplanted later. ASCT should be considered in EATL patients responding to initial therapy.
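The survival methods named above (Kaplan-Meier product-limit estimates compared by the log-rank test) are standard and easy to reproduce. A minimal sketch using the Python lifelines library, with invented placeholder times since individual patient data are not given in the abstract:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up times (months) and death indicators (1 = died, 0 = censored)
# for two groups, e.g. patients with vs. without a history of celiac disease.
t_celiac, d_celiac = np.array([46, 60, 12, 80, 34, 55]), np.array([0, 0, 1, 0, 1, 0])
t_other, d_other = np.array([8, 15, 40, 22, 5, 70]), np.array([1, 1, 0, 1, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_celiac, event_observed=d_celiac, label="celiac history")
print(kmf.survival_function_)  # product-limit OS estimate for one group

# Log-rank test for a survival difference between the groups
res = logrank_test(t_celiac, t_other, event_observed_A=d_celiac, event_observed_B=d_other)
print(f"log-rank p = {res.p_value:.3f}")
```

Non-relapse mortality and relapse, being competing events, would instead use cumulative incidence functions and Gray's test, as the abstract notes.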
Abstract:
BACKGROUND: Combination antiretroviral treatment (cART) has been very successful, especially among selected patients in clinical trials. The aim of this study was to describe outcomes of cART at the population level in a large national cohort. METHODS: Characteristics of participants of the Swiss HIV Cohort Study on stable cART at two semiannual visits in 2007 were analyzed with respect to era of treatment initiation, number of previous virologically failed regimens, and self-reported adherence. Starting ART in the mono/dual era, before HIV-1 RNA assays became available, was counted as one failed regimen. Logistic regression was used to identify risk factors for virological failure between the two consecutive visits. RESULTS: Of 4541 patients, 31.2% and 68.8% had initiated therapy in the mono/dual and cART eras, respectively, and had been on treatment for a median of 11.7 vs. 5.7 years. At visit 1 in 2007, the mean number of previously failed regimens was 3.2 vs. 0.5, and the viral load was undetectable (<50 copies/mL) in 84.6% vs. 89.1% of the participants, respectively. For participants from the mono/dual era, adjusted odds ratios of a detectable viral load at visit 2 for a history of 2, 3, 4, and >4 previous failures compared with 1 were 0.9 (95% CI 0.4-1.7), 0.8 (0.4-1.6), 1.6 (0.8-3.2), and 3.3 (1.7-6.6), respectively, and 2.3 (1.1-4.8) for >2 missed cART doses during the last month compared with perfect adherence. For participants from the cART era, odds ratios for a history of 1, 2, and >2 previous failures compared with none were 1.8 (95% CI 1.3-2.5), 2.8 (1.7-4.5), and 7.8 (4.5-13.5), respectively, and 2.8 (1.6-4.8) for >2 missed cART doses during the last month compared with perfect adherence. CONCLUSIONS: A higher number of previously failed regimens and imperfect adherence to therapy were independent predictors of imminent virological failure.
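As a hedged sketch of the analysis step (the dataset, variable names, and model below are invented for illustration and are not the study's actual code), a logistic regression of detectable viral load at visit 2 on failure history and adherence could look like this in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Invented example data: one row per participant
df = pd.DataFrame({
    "detectable_vl": rng.binomial(1, 0.12, 500),       # outcome at visit 2
    "n_failed_regimens": rng.poisson(1.0, 500),        # previously failed regimens
    "missed_doses_gt2": rng.binomial(1, 0.10, 500),    # >2 missed doses last month
})

X = sm.add_constant(df[["n_failed_regimens", "missed_doses_gt2"]])
fit = sm.Logit(df["detectable_vl"], X).fit(disp=0)

# exp(coefficient) is the odds ratio per unit increase of each predictor
print(np.exp(fit.params))
print(fit.conf_int().apply(np.exp))  # 95% CIs on the odds-ratio scale
```

The study additionally adjusted for covariates and modelled failure history as categories (e.g. 1, 2, >2 previous failures), which would enter as dummy variables rather than a single count.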
Abstract:
Debris flow hazard modelling at medium (regional) scale has been the subject of various studies in recent years. In this study, hazard zonation was carried out incorporating information about debris flow initiation probability (spatial and temporal) and the delimitation of the potential runout areas. Debris flow hazard zonation was carried out for the area of the Consortium of Mountain Municipalities of Valtellina di Tirano (Central Alps, Italy). The complexity of the phenomenon, the scale of the study, the variability of local conditioning factors, and the lack of data limited the use of process-based models for runout zone delimitation. First, a map of hazard initiation probabilities was prepared for the study area, based on the available susceptibility zoning information and on the analysis of two sets of aerial photographs for the temporal probability estimation. Afterwards, the hazard initiation map was used as one of the inputs for an empirical GIS-based model (Flow-R), developed at the University of Lausanne (Switzerland). Estimation of debris flow magnitude was omitted, as the main aim of the analysis was to prepare a debris flow hazard map at medium scale. A digital elevation model with a 10 m resolution was used, together with land use, geology, and debris flow hazard initiation maps, as input to the Flow-R model to restrict potential areas within each hazard initiation probability class to locations where debris flows are most likely to initiate. Runout areas were then calculated using multiple flow direction and energy-based algorithms. Maximum probable runout zones were calibrated using documented past events and aerial photographs. Finally, two debris flow hazard maps were prepared: the first simply delimits five hazard zones, while the second incorporates information about debris flow spreading direction probabilities, showing areas more likely to be affected by future debris flows. Limitations of the modelling arise mainly from the models applied and from the analysis scale, which neglect local controlling factors of debris flow hazard. The presented approach to debris flow hazard analysis, associating automatic detection of the source areas with a simple assessment of debris flow spreading, provided results suitable for subsequent hazard and risk studies. However, for the validation and transferability of the parameters and results to other study areas, more testing is needed.
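Flow-R's spreading step combines flow-direction and energy-based criteria. As a rough, hedged sketch of the multiple-flow-direction idea alone (in the spirit of Holmgren's exponent-weighted spreading, not Flow-R's exact implementation), the following numpy function splits the outflow of one DEM cell among its lower neighbours:

```python
import numpy as np

def mfd_weights(dem, row, col, x=4.0, cell=10.0):
    """Multiple-flow-direction weights for one cell of a DEM (2-D array of
    elevations, `cell` metres per pixel): outflow is divided among the lower
    of the 8 neighbours in proportion to (tan slope)**x. Larger x concentrates
    the flow toward the steepest descent."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    w = np.zeros(len(offsets))
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
            drop = dem[row, col] - dem[r, c]
            if drop > 0:  # only downslope neighbours receive flow
                dist = cell * (np.sqrt(2.0) if dr != 0 and dc != 0 else 1.0)
                w[k] = (drop / dist) ** x
    total = w.sum()
    return w / total if total > 0 else w  # all-zero weights: a pit or flat cell
```

Iterating this from the detected source cells, and stopping where an energy or persistence criterion is no longer met, yields the kind of probable runout zones described above.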
Abstract:
Background: Based on several experimental results and on a preliminary study, a trial was undertaken to assess the efficacy of adalimumab, a TNF-α inhibitor, in patients with radicular pain due to lumbar disc herniation. Methods: A multicentre, double-blind, randomised controlled trial was conducted between May 2005 and December 2007 in Switzerland. Patients with acute (<12 weeks) and severe (Oswestry Disability Index >50) radicular leg pain and imaging-confirmed lumbar disc herniation were randomised to receive as adjuvant therapy either two subcutaneous injections of adalimumab (40 mg) given 7 days apart or matching placebo. The primary outcome was leg pain, recorded every day for 10 days and at 6 weeks and 6 months on a visual analogue scale (0 to 100). Results: Of the 265 patients screened, 61 were enrolled (adalimumab, n = 31) and 4 were lost to follow-up. Over time, the evolution of leg pain was more favourable in the adalimumab group than in the placebo group (p<0.001). However, the effect size was relatively small, and at the last follow-up the difference was 13.8 (95% CI -11.5 to 39.0). In the adalimumab group, twice as many patients fulfilled the criteria for "responders" and for "low residual disease impact" (p<0.05), and fewer surgical discectomies were performed (6 versus 13, p=0.04). Conclusion: The addition of a short course of adalimumab to the treatment regimen of patients suffering from acute and severe sciatica resulted in a small decrease in leg pain and in significantly fewer surgical procedures.
Abstract:
Imatinib (Glivec®) has transformed the treatment and short-term prognosis of chronic myeloid leukaemia (CML) and gastrointestinal stromal tumour (GIST). However, the treatment must be taken indefinitely and is not devoid of inconvenience and toxicity; moreover, resistance or escape from disease control occurs in a significant number of patients. Imatinib is a substrate of the cytochromes P450 CYP3A4/5 and of the multidrug transporter P-glycoprotein (product of the MDR1 gene), and it also binds to α1-acid glycoprotein (AAG) in plasma. Considering the large inter-individual differences in the expression and function of these systems, the disposition and clinical activity of imatinib can be expected to vary widely among patients, calling for dosage individualisation. The aim of this exploratory study was to determine the average pharmacokinetic parameters characterizing the disposition of imatinib in the target population, to assess their inter-individual variability, and to identify influential factors affecting them. A total of 321 plasma concentrations were measured in 59 patients receiving Glivec® at diverse dosage regimens, using a validated chromatographic method developed for this study. The results were analysed by non-linear mixed-effects modelling (NONMEM). A one-compartment model with first-order absorption described the data appropriately, with an average apparent clearance of 12.4 L/h, a volume of distribution of 268 L, and an absorption rate constant of 0.47 h⁻¹. The clearance was affected by body weight, age, and sex; no influence of interacting drugs was found. DNA samples were used for pharmacogenetic exploration: the MDR1 polymorphism 3435C>T and the AAG phenotype appear to modulate the disposition of imatinib. Large inter-individual variability (CV%) remained unexplained by the demographic covariates considered, both for clearance (40%) and for the volume of distribution (71%). Together with intra-patient variability (34%), this translates into an 8-fold range for the 90% prediction interval of plasma concentrations expected under a fixed dosing regimen. This is a strong argument for further investigating the possible usefulness of a therapeutic drug monitoring programme for imatinib, which may help individualise the dosing regimen before overt disease progression or treatment toxicity is observed, thus improving both the long-term effectiveness and the tolerability of this drug.
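The reported estimates fully specify the structural model. Assuming a single oral dose and, for illustration only, near-complete bioavailability (F ≈ 1, which the abstract does not state), the one-compartment model with first-order absorption gives C(t) = F·D·ka / (V·(ka − k)) · (e^(−k·t) − e^(−ka·t)), with elimination rate constant k = CL/V. A minimal Python sketch using the published population means:

```python
import numpy as np

# Population estimates from the abstract (apparent values, oral dosing)
CL, V, KA = 12.4, 268.0, 0.47   # clearance (L/h), volume (L), absorption (1/h)
K = CL / V                      # elimination rate constant, ~0.046 1/h

def conc(t_h, dose_mg, f=1.0):
    """Plasma concentration (mg/L) t_h hours after a single oral dose,
    one-compartment model with first-order absorption; F assumed ~1."""
    return f * dose_mg * KA / (V * (KA - K)) * (np.exp(-K * t_h) - np.exp(-KA * t_h))

t = np.linspace(0.0, 24.0, 97)
c = conc(t, 400.0)              # a common imatinib dose, used here for illustration
print(f"Cmax ~ {c.max():.2f} mg/L at t ~ {t[c.argmax()]:.1f} h")
```

The 8-fold prediction-interval width quoted above reflects how widely such a curve shifts across patients once the 40% CV on clearance and 71% CV on volume are layered onto these population means.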
Abstract:
Introduction: CD22 is expressed on most B-cell non-Hodgkin lymphomas (NHL); inotuzumab ozogamicin (INO) is an anti-CD22 antibody conjugated to calicheamicin. This study evaluated the safety and tolerability of INO plus R-CVP in patients (pts) with relapsed/refractory CD22+ B-NHL; efficacy data were also collected. Methods: Part 1 of this open-label study identified a maximum tolerated dose (MTD) of INO 0.8 mg/m2 on day 2 plus R-CVP (rituximab 375 mg/m2, cyclophosphamide 750 mg/m2, and vincristine 1.4 mg/m2 on day 1; prednisone 40 mg/m2 on days 1-5) every 21 days. Subsequently, pts were enrolled in the MTD confirmation cohort (part 2, n = 10), which required a dose-limiting toxicity rate of <33% in cycle 1 and <4 pts discontinuing prior to cycle 3 due to an adverse event (AE), and then in the MTD expansion cohort (part 3, n = 22), which explored preliminary activity. Results: Parts 2 and 3 enrolled 32 pts: 16 with diffuse large B-cell lymphoma, 15 with follicular lymphoma, and one with mantle cell lymphoma. Median age was 64.5 years (range 44-81 years); 34% of pts had 1 prior regimen, 34% had 2, 28% had ≥3, and 3% had none (median 2; range 0-6). Median treatment duration was five cycles (range 1-6). Part 2 confirmed the MTD as standard-dose R-CVP plus INO 0.8 mg/m2; 2/10 pts had a dose-limiting toxicity (grade 3 increased ALT/AST, grade 4 neutropenia requiring G-CSF). One pt discontinued because of an AE prior to cycle 3. Common treatment-related AEs were thrombocytopenia (78%), neutropenia (66%), fatigue (50%), leukopenia (50%), nausea (41%), and lymphopenia (38%); common grade 3/4 AEs were neutropenia (63%), thrombocytopenia (53%), leukopenia (38%), and lymphopenia (31%). There was one case of treatment-related fatal pneumonia with grade 4 neutropenia. Ten pts discontinued treatment due to AEs; thrombocytopenia/delayed platelet recovery was the leading cause (grade 1/2, n = 6; grade 3/4, n = 3). The objective response rate (ORR) was 77% (n = 24/31 evaluable pts), including 26% (n = 8/31) with complete response (CR); three pts had stable disease. Among pts with follicular lymphoma, the ORR was 100% (n = 15/15), including seven pts with CR. Among pts with diffuse large B-cell lymphoma, the ORR was 60% (n = 9/16), including one pt with CR. Conclusions: Results suggest that INO plus R-CVP has acceptable toxicity and promising activity in relapsed/refractory CD22+ B-NHL. The most common grade 3/4 AEs were hematologic. Follow-up for progression-free and overall survival is ongoing.